From 5428a97e3c9578b79fa7b30b6c53f2ae9759f418 Mon Sep 17 00:00:00 2001 From: bene-ges Date: Fri, 2 Jun 2023 18:44:18 +0300 Subject: [PATCH] Spellchecking ASR customization model (#6179) * bug fixes Signed-off-by: Alexandra Antonova * fix bugs, add preparation and evaluation scripts, add readme Signed-off-by: Alexandra Antonova * small fixes Signed-off-by: Alexandra Antonova * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * add real coverage calculation, small fixes, more debug information Signed-off-by: Alexandra Antonova * add option to pass a filelist and output folder - to handle inference from multiple input files Signed-off-by: Alexandra Antonova * added preprocessing for yago wikipedia articles - finding yago entities and their subphrases Signed-off-by: Alexandra Antonova * yago wiki preprocessing, sampling, pseudonormalization Signed-off-by: Alexandra Antonova * more scripts for preparation of training examples Signed-off-by: Alexandra Antonova * bug fixes Signed-off-by: Alexandra Antonova * add some alphabet checks Signed-off-by: Alexandra Antonova * add bert on subwords, concatenate it to bert on characters Signed-off-by: Alexandra Antonova * add calculation of character_pos_to_subword_pos Signed-off-by: Alexandra Antonova * bug fix Signed-off-by: Alexandra Antonova * bug fix Signed-off-by: Alexandra Antonova * pdb Signed-off-by: Alexandra Antonova * tensor join bug fix Signed-off-by: Alexandra Antonova * double hidden_size in classifier Signed-off-by: Alexandra Antonova * pdb Signed-off-by: Alexandra Antonova * default index value 0 instead of -1 because index cannot be negative Signed-off-by: Alexandra Antonova * pad index value 0 instead of -1 because index cannot be negative Signed-off-by: Alexandra Antonova * remove pdb Signed-off-by: Alexandra Antonova * fix bugs, add creation of tarred dataset Signed-off-by: Alexandra Antonova * add possibility to change sequence len at inference Signed-off-by: Alexandra Antonova * change sampling of dummy candidates at inference, add candidate info file Signed-off-by: Alexandra Antonova * fix import Signed-off-by: Alexandra Antonova * fix bug Signed-off-by: Alexandra Antonova * update transcription now uses info Signed-off-by: Alexandra Antonova * write path Signed-off-by: Alexandra Antonova * 1. add tarred dataset support(untested). 2. 
fix bug with ban_ngrams in indexing Signed-off-by: Alexandra Antonova * skip short_sent if no real candidates Signed-off-by: Alexandra Antonova * fix import Signed-off-by: Alexandra Antonova * add braceexpand Signed-off-by: Alexandra Antonova * fixes Signed-off-by: Alexandra Antonova * fix bug Signed-off-by: Alexandra Antonova * fix bug Signed-off-by: Alexandra Antonova * fix bug in np.ones Signed-off-by: Alexandra Antonova * fix bug in collate Signed-off-by: Alexandra Antonova * change tensor type to long because of error in torch.gather Signed-off-by: Alexandra Antonova * fix for empty spans tensor Signed-off-by: Alexandra Antonova * same fixes in _collate_fn for tarred dataset Signed-off-by: Alexandra Antonova * fix bug from previous commit Signed-off-by: Alexandra Antonova * change int types to be shorter to minimize tar size Signed-off-by: Alexandra Antonova * refactoring of datasets and inference Signed-off-by: Alexandra Antonova * bug fix Signed-off-by: Alexandra Antonova * bug fix Signed-off-by: Alexandra Antonova * bug fix Signed-off-by: Alexandra Antonova * tar by 100k examples, small fixes Signed-off-by: Alexandra Antonova * small fixes, add analytics script Signed-off-by: Alexandra Antonova * Add functions for dynamic programming comparison to get best path by ngrams Signed-off-by: Alexandra Antonova * fixes Signed-off-by: Alexandra Antonova * small fix Signed-off-by: Alexandra Antonova * fixes to support testing on SPGISpeech Signed-off-by: Alexandra Antonova * add preprocessing for userlibri Signed-off-by: Alexandra Antonova * some refactoring Signed-off-by: Alexandra Antonova * some refactoring Signed-off-by: Alexandra Antonova * move some functions to utils to reuse from other project Signed-off-by: Alexandra Antonova * move some functions to utils to reuse from other project Signed-off-by: Alexandra Antonova * move some functions to utils to reuse from other project Signed-off-by: Alexandra Antonova * small refactoring before pr. Add bash-scripts reproducing evaluation Signed-off-by: Alexandra Antonova * style fix Signed-off-by: Alexandra Antonova * small fixes in inference Signed-off-by: Alexandra Antonova * bug fix - didn't move window on last symbol Signed-off-by: Alexandra Antonova * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fix bug - shuffle was before truncation of sorted candidates Signed-off-by: Alexandra Antonova * refactoring, fix some bugs Signed-off-by: Alexandra Antonova * variour fixes. Add word_indices at inference Signed-off-by: Alexandra Antonova * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * add candidate positions Signed-off-by: Alexandra Antonova * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Move data preparation and evaluation to other repo Signed-off-by: Alexandra Antonova * add infer_reproduce_paper. 
Refactoring Signed-off-by: Alexandra Antonova * refactor inference using fragment indices Signed-off-by: Alexandra Antonova * add some helper functions Signed-off-by: Alexandra Antonova * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fix bug with parameters order Signed-off-by: Alexandra Antonova * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fix bugs Signed-off-by: Alexandra Antonova * refactoring, fix bug Signed-off-by: Alexandra Antonova * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * add multiple variants of adjusting start/end positions Signed-off-by: Alexandra Antonova * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * more fixes Signed-off-by: Alexandra Antonova * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * add unit tests, other fixes Signed-off-by: Alexandra Antonova * fix Signed-off-by: Alexandra Antonova * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fix CodeQl warnings Signed-off-by: Alexandra Antonova * add script for full inference pipeline, refactoring Signed-off-by: Alexandra Antonova * add tutorial Signed-off-by: Alexandra Antonova * take example data from HuggingFace Signed-off-by: Alexandra Antonova * add docs Signed-off-by: Alexandra Antonova * fix comment Signed-off-by: Alexandra Antonova * fix bug Signed-off-by: Alexandra Antonova * small fixes for PR Signed-off-by: Alexandra Antonova * add some more tests Signed-off-by: Alexandra Antonova * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * try to fix tests adding with_downloads Signed-off-by: Alexandra Antonova * skip tests with tokenizer download Signed-off-by: Alexandra Antonova --------- Signed-off-by: Alexandra Antonova Signed-off-by: Alexandra Antonova Co-authored-by: Alexandra Antonova Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> --- docs/source/nlp/models.rst | 1 + .../nlp/spellchecking_asr_customization.rst | 128 ++ docs/source/starthere/tutorials.rst | 3 + .../spellchecking_asr_customization/README.md | 32 + .../checkpoint_to_nemo.py | 38 + ...pellchecking_asr_customization_config.yaml | 97 ++ .../convert_data_to_tarred.sh | 50 + .../create_custom_vocab_index.py | 72 + .../create_tarred_dataset.py | 99 ++ .../helpers.py | 86 + .../postprocess_and_update_manifest.py | 79 + .../prepare_input_from_manifest.py | 129 ++ .../run_infer.sh | 99 ++ .../run_training.sh | 56 + .../run_training_tarred.sh | 63 + .../spellchecking_asr_customization_infer.py | 123 ++ .../spellchecking_asr_customization_train.py | 66 + .../extract_giza_alignments.py | 215 +-- .../__init__.py | 20 + .../bert_example.py | 593 +++++++ .../dataset.py | 521 ++++++ .../spellchecking_asr_customization/utils.py | 845 ++++++++++ .../text_normalization_as_tagging/utils.py | 196 +++ nemo/collections/nlp/models/__init__.py | 1 + .../__init__.py | 18 + .../spellchecking_model.py | 526 ++++++ .../spoken_wikipedia/run.sh | 2 +- .../test_spellchecking_asr_customization.py | 1102 +++++++++++++ .../ctc_segmentation/scripts/prepare_data.py | 2 +- ...pellMapper_English_ASR_Customization.ipynb | 1403 +++++++++++++++++ .../spellmapper_customization_vocabulary.png | Bin 0 -> 39243 bytes .../images/spellmapper_data_preparation.png | 
Bin 0 -> 75265 bytes .../images/spellmapper_inference_pipeline.png | Bin 0 -> 146148 bytes 33 files changed, 6459 insertions(+), 206 deletions(-) create mode 100644 docs/source/nlp/spellchecking_asr_customization.rst create mode 100644 examples/nlp/spellchecking_asr_customization/README.md create mode 100644 examples/nlp/spellchecking_asr_customization/checkpoint_to_nemo.py create mode 100644 examples/nlp/spellchecking_asr_customization/conf/spellchecking_asr_customization_config.yaml create mode 100644 examples/nlp/spellchecking_asr_customization/convert_data_to_tarred.sh create mode 100644 examples/nlp/spellchecking_asr_customization/create_custom_vocab_index.py create mode 100644 examples/nlp/spellchecking_asr_customization/create_tarred_dataset.py create mode 100644 examples/nlp/spellchecking_asr_customization/helpers.py create mode 100644 examples/nlp/spellchecking_asr_customization/postprocess_and_update_manifest.py create mode 100644 examples/nlp/spellchecking_asr_customization/prepare_input_from_manifest.py create mode 100644 examples/nlp/spellchecking_asr_customization/run_infer.sh create mode 100644 examples/nlp/spellchecking_asr_customization/run_training.sh create mode 100644 examples/nlp/spellchecking_asr_customization/run_training_tarred.sh create mode 100644 examples/nlp/spellchecking_asr_customization/spellchecking_asr_customization_infer.py create mode 100644 examples/nlp/spellchecking_asr_customization/spellchecking_asr_customization_train.py create mode 100644 nemo/collections/nlp/data/spellchecking_asr_customization/__init__.py create mode 100644 nemo/collections/nlp/data/spellchecking_asr_customization/bert_example.py create mode 100644 nemo/collections/nlp/data/spellchecking_asr_customization/dataset.py create mode 100644 nemo/collections/nlp/data/spellchecking_asr_customization/utils.py create mode 100644 nemo/collections/nlp/models/spellchecking_asr_customization/__init__.py create mode 100644 nemo/collections/nlp/models/spellchecking_asr_customization/spellchecking_model.py create mode 100644 tests/collections/nlp/test_spellchecking_asr_customization.py create mode 100644 tutorials/nlp/SpellMapper_English_ASR_Customization.ipynb create mode 100644 tutorials/nlp/images/spellmapper_customization_vocabulary.png create mode 100644 tutorials/nlp/images/spellmapper_data_preparation.png create mode 100644 tutorials/nlp/images/spellmapper_inference_pipeline.png diff --git a/docs/source/nlp/models.rst b/docs/source/nlp/models.rst index 932be201bfb2..ad50d976db9f 100755 --- a/docs/source/nlp/models.rst +++ b/docs/source/nlp/models.rst @@ -9,6 +9,7 @@ NeMo's NLP collection supports provides the following task-specific models: :maxdepth: 1 punctuation_and_capitalization_models + spellchecking_asr_customization token_classification joint_intent_slot text_classification diff --git a/docs/source/nlp/spellchecking_asr_customization.rst b/docs/source/nlp/spellchecking_asr_customization.rst new file mode 100644 index 000000000000..f9009b520361 --- /dev/null +++ b/docs/source/nlp/spellchecking_asr_customization.rst @@ -0,0 +1,128 @@ +.. _spellchecking_asr_customization: + +SpellMapper (Spellchecking ASR Customization) Model +===================================================== + +SpellMapper is a non-autoregressive model for postprocessing of ASR output. It gets as input a single ASR hypothesis (text) and a custom vocabulary and predicts which fragments in the ASR hypothesis should be replaced by which custom words/phrases if any. 
Unlike traditional spellchecking approaches, which aim to correct known words using language models, SpellMapper's goal is to correct highly specific user terms, out-of-vocabulary (OOV) words or spelling variations (e.g., "John Koehn", "Jon Cohen"). + +This model is an alternative to word boosting/shallow fusion approaches: + +- does not require retraining ASR model; +- does not require beam-search/language model (LM); +- can be applied on top of any English ASR model output; + +Model Architecture +------------------ +Though SpellMapper is based on `BERT `__ :cite:`nlp-ner-devlin2018bert` architecture, it uses some non-standard tricks that make it different from other BERT-based models: + +- ten separators (``[SEP]`` tokens) are used to combine the ASR hypothesis and ten candidate phrases into a single input; +- the model works on character level; +- subword embeddings are concatenated to the embeddings of each character that belongs to this subword; + + .. code:: + + Example input: [CLS] a s t r o n o m e r s _ d i d i e _ s o m o n _ a n d _ t r i s t i a n _ g l l o [SEP] d i d i e r _ s a u m o n [SEP] a s t r o n o m i e [SEP] t r i s t a n _ g u i l l o t [SEP] ... + Input segments: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 4 + Example output: 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 3 3 3 3 3 3 3 3 3 3 3 3 3 0 ... + +The model calculates logits for each character x 11 labels: + +- ``0`` - character doesn't belong to any candidate, +- ``1..10`` - character belongs to candidate with this id. + +At inference average pooling is applied to calculate replacement probability for the whole fragments. + +Quick Start Guide +----------------- + +We recommend you try this model in a Jupyter notebook (need GPU): +`NeMo/tutorials/nlp/SpellMapper_English_ASR_Customization.ipynb `__. + +A pretrained English checkpoint can be found at `HuggingFace `__. + +An example inference pipeline can be found here: `NeMo/examples/nlp/spellchecking_asr_customization/run_infer.sh `__. + +An example script on how to train the model can be found here: `NeMo/examples/nlp/spellchecking_asr_customization/run_training.sh `__. + +An example script on how to train on large datasets can be found here: `NeMo/examples/nlp/spellchecking_asr_customization/run_training_tarred.sh `__. + +The default configuration file for the model can be found here: `NeMo/examples/nlp/spellchecking_asr_customization/conf/spellchecking_asr_customization_config.yaml `__. + +.. _dataset_spellchecking_asr_customization: + +Input/Output Format at Inference stage +-------------------------------------- +Here we describe input/output format of the SpellMapper model. + +.. note:: + + If you use `inference pipeline `__ this format will be hidden inside and you only need to provide an input manifest and user vocabulary and you will get a corrected manifest. + +An input line should consist of 4 tab-separated columns: + 1. text of ASR-hypothesis + 2. texts of 10 candidates separated by semicolon + 3. 1-based ids of non-dummy candidates, separated by space + 4. approximate start/end coordinates of non-dummy candidates (correspond to ids in third column) + +Example input (in one line): + +.. 
code:: + + t h e _ t a r a s i c _ o o r d a _ i s _ a _ p a r t _ o f _ t h e _ a o r t a _ l o c a t e d _ i n _ t h e _ t h o r a x + h e p a t i c _ c i r r h o s i s;u r a c i l;c a r d i a c _ a r r e s t;w e a n;a p g a r;p s y c h o m o t o r;t h o r a x;t h o r a c i c _ a o r t a;a v f;b l o c k a d e d + 1 2 6 7 8 9 10 + CUSTOM 6 23;CUSTOM 4 10;CUSTOM 4 15;CUSTOM 56 62;CUSTOM 5 19;CUSTOM 28 31;CUSTOM 39 48 + +Each line in SpellMapper output is tab-separated and consists of 4 columns: + 1. ASR-hypothesis (same as in input) + 2. 10 candidates separated by semicolon (same as in input) + 3. fragment predictions, separated by semicolon, each prediction is a tuple (start, end, candidate_id, probability) + 4. letter predictions - candidate_id predicted for each letter (this is only for debug purposes) + +Example output (in one line): + +.. code:: + + t h e _ t a r a s i c _ o o r d a _ i s _ a _ p a r t _ o f _ t h e _ a o r t a _ l o c a t e d _ i n _ t h e _ t h o r a x + h e p a t i c _ c i r r h o s i s;u r a c i l;c a r d i a c _ a r r e s t;w e a n;a p g a r;p s y c h o m o t o r;t h o r a x;t h o r a c i c _ a o r t a;a v f;b l o c k a d e d + 56 62 7 0.99998;4 20 8 0.95181;12 20 8 0.44829;4 17 8 0.99464;12 17 8 0.97645 + 8 8 8 0 8 8 8 8 8 8 8 8 8 8 8 8 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 7 7 7 7 7 7 + +Training Data Format +-------------------- + +For training, the data should consist of 5 files: + +- ``config.json`` - BERT config +- ``label_map.txt`` - labels from 0 to 10, do not change +- ``semiotic_classes.txt`` - currently there are only two classes: ``PLAIN`` and ``CUSTOM``, do not change +- ``train.tsv`` - training examples +- ``test.tsv`` - validation examples + +Note that since all these examples are synthetic, we do not reserve a set for final testing. Instead, we run `inference pipeline `__ and compare resulting word error rate (WER) to the WER of baseline ASR output. + +One (non-tarred) training example should consist of 4 tab-separated columns: + 1. text of ASR-hypothesis + 2. texts of 10 candidates separated by semicolon + 3. 1-based ids of correct candidates, separated by space, or 0 if none + 4. start/end coordinates of correct candidates (correspond to ids in third column) + +Example (in one line): + +.. code:: + + a s t r o n o m e r s _ d i d i e _ s o m o n _ a n d _ t r i s t i a n _ g l l o + d i d i e r _ s a u m o n;a s t r o n o m i e;t r i s t a n _ g u i l l o t;t r i s t e s s e;m o n a d e;c h r i s t i a n;a s t r o n o m e r;s o l o m o n;d i d i d i d i d i;m e r c y + 1 3 + CUSTOM 12 23;CUSTOM 28 41 + +For data preparation see `this script `__ + + +References +---------- + +.. 
bibliography:: nlp_all.bib + :style: plain + :labelprefix: NLP-NER + :keyprefix: nlp-ner- diff --git a/docs/source/starthere/tutorials.rst b/docs/source/starthere/tutorials.rst index cb81aecc1109..9c960053398b 100644 --- a/docs/source/starthere/tutorials.rst +++ b/docs/source/starthere/tutorials.rst @@ -130,6 +130,9 @@ To run a tutorial: * - NLP - Punctuation and Capitalization - `Punctuation and Capitalization `_ + * - NLP + - Spellchecking ASR Customization - SpellMapper + - `Spellchecking ASR Customization - SpellMapper `_ * - NLP - Entity Linking - `Entity Linking `_
diff --git a/examples/nlp/spellchecking_asr_customization/README.md b/examples/nlp/spellchecking_asr_customization/README.md new file mode 100644 index 000000000000..2d83fd8d11ad --- /dev/null +++ b/examples/nlp/spellchecking_asr_customization/README.md @@ -0,0 +1,32 @@ +# SpellMapper - spellchecking model for ASR Customization + +This model is inspired by Microsoft's paper https://arxiv.org/pdf/2203.00888.pdf, but does not repeat its implementation. +The goal is to build a model that takes as input a single ASR hypothesis (text) and a vocabulary of custom words/phrases and predicts which fragments in the ASR hypothesis should be replaced by which custom words/phrases, if any. +Our model is non-autoregressive (NAR), based on a transformer architecture (BERT with multiple separators). + +As initial data we use about 5 million entities from the [YAGO corpus](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/). These entities are short phrases from Wikipedia headings. +In order to get misspelled versions of these phrases, we feed this data to a TTS model and then to an ASR model. +Having a "parallel" corpus of "correct + misspelled" phrases, we use statistical machine translation techniques to create a dictionary of possible ngram mappings with their respective frequencies. +We create an auxiliary algorithm that takes as input a sentence (ASR hypothesis) and a large custom dictionary (e.g. 5000 phrases) and selects the top 10 candidate phrases that are probably contained in this sentence in misspelled form. +The task of our final neural model is to predict which fragments in the ASR hypothesis should be replaced by which of the top-10 candidate phrases, if any. + +The pipeline consists of multiple steps: + +1. Download or generate training data. + See `https://github.com/bene-ges/nemo_compatible/tree/main/scripts/nlp/en_spellmapper/dataset_preparation` + +2. [Optional] Convert the training dataset to tarred files. + `convert_data_to_tarred.sh` + +3. Train the spellchecking model. + `run_training.sh` + or + `run_training_tarred.sh` + +4. Run evaluation. + - [test_on_kensho.sh](https://github.com/bene-ges/nemo_compatible/blob/main/scripts/nlp/en_spellmapper/evaluation/test_on_kensho.sh) + - [test_on_userlibri.sh](https://github.com/bene-ges/nemo_compatible/blob/main/scripts/nlp/en_spellmapper/evaluation/test_on_userlibri.sh) + - [test_on_spoken_wikipedia.sh](https://github.com/bene-ges/nemo_compatible/blob/main/scripts/nlp/en_spellmapper/evaluation/test_on_spoken_wikipedia.sh) + +5. Run inference. + `bash run_infer.sh`
diff --git a/examples/nlp/spellchecking_asr_customization/checkpoint_to_nemo.py b/examples/nlp/spellchecking_asr_customization/checkpoint_to_nemo.py new file mode 100644 index 000000000000..c2f514f3e67e --- /dev/null +++ b/examples/nlp/spellchecking_asr_customization/checkpoint_to_nemo.py @@ -0,0 +1,38 @@ +# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +""" +This script converts checkpoint .ckpt to .nemo file. + +This script uses the `examples/nlp/spellchecking_asr_customization/conf/spellchecking_asr_customization_config.yaml` +config file by default. The other option is to set another config file via command +line arguments by `--config-name=CONFIG_FILE_PATH'. +""" + +from omegaconf import DictConfig, OmegaConf + +from nemo.collections.nlp.models import SpellcheckingAsrCustomizationModel +from nemo.core.config import hydra_runner +from nemo.utils import logging + + +@hydra_runner(config_path="conf", config_name="spellchecking_asr_customization_config") +def main(cfg: DictConfig) -> None: + logging.debug(f'Config Params: {OmegaConf.to_yaml(cfg)}') + SpellcheckingAsrCustomizationModel.load_from_checkpoint(cfg.checkpoint_path).save_to(cfg.target_nemo_path) + + +if __name__ == "__main__": + main() diff --git a/examples/nlp/spellchecking_asr_customization/conf/spellchecking_asr_customization_config.yaml b/examples/nlp/spellchecking_asr_customization/conf/spellchecking_asr_customization_config.yaml new file mode 100644 index 000000000000..c98915cdfc6f --- /dev/null +++ b/examples/nlp/spellchecking_asr_customization/conf/spellchecking_asr_customization_config.yaml @@ -0,0 +1,97 @@ +name: &name spellchecking +lang: ??? # e.g. 'ru', 'en' + +# Pretrained Nemo Models +pretrained_model: null + +trainer: + devices: 1 # the number of gpus, 0 for CPU + num_nodes: 1 + max_epochs: 3 # the number of training epochs + enable_checkpointing: false # provided by exp_manager + logger: false # provided by exp_manager + accumulate_grad_batches: 1 # accumulates grads every k batches + gradient_clip_val: 0.0 + precision: 32 # Should be set to 16 for O1 and O2 to enable the AMP. + accelerator: gpu + strategy: ddp + log_every_n_steps: 1 # Interval of logging. + val_check_interval: 1.0 # Set to 0.25 to check 4 times per epoch, or an int for number of iterations + resume_from_checkpoint: null # The path to a checkpoint file to continue the training, restores the whole state including the epoch, step, LR schedulers, apex, etc. + +model: + do_training: true + label_map: ??? # path/.../label_map.txt + semiotic_classes: ??? 
# path/.../semiotic_classes.txt + max_sequence_len: 128 + lang: ${lang} + hidden_size: 768 + + optim: + name: adamw + lr: 3e-5 + weight_decay: 0.1 + + sched: + name: WarmupAnnealing + + # pytorch lightning args + monitor: val_loss + reduce_on_plateau: false + + # scheduler config override + warmup_ratio: 0.1 + last_epoch: -1 + + language_model: + pretrained_model_name: bert-base-uncased # For ru, try DeepPavlov/rubert-base-cased | For de or multilingual, try bert-base-multilingual-cased + lm_checkpoint: null + config_file: null # json file, precedence over config + config: null + + tokenizer: + tokenizer_name: ${model.language_model.pretrained_model_name} # or sentencepiece + vocab_file: null # path to vocab file + tokenizer_model: null # only used if tokenizer is sentencepiece + special_tokens: null + +exp_manager: + exp_dir: nemo_experiments # where to store logs and checkpoints + name: training # name of experiment + create_tensorboard_logger: True + create_checkpoint_callback: True + checkpoint_callback_params: + save_top_k: 3 + monitor: "val_loss" + mode: "min" + +tokenizer: + tokenizer_name: ${model.transformer} # or sentencepiece + vocab_file: null # path to vocab file + tokenizer_model: null # only used if tokenizer is sentencepiece + special_tokens: null + +# Data +data: + train_ds: + data_path: ??? # provide the full path to the file + batch_size: 8 + shuffle: true + num_workers: 3 + pin_memory: false + drop_last: false + + validation_ds: + data_path: ??? # provide the full path to the file. + batch_size: 8 + shuffle: false + num_workers: 3 + pin_memory: false + drop_last: false + + +# Inference +inference: + from_file: null # Path to the raw text, no labels required. Each sentence on a separate line + out_file: null # Path to the output file + batch_size: 16 # batch size for inference.from_file diff --git a/examples/nlp/spellchecking_asr_customization/convert_data_to_tarred.sh b/examples/nlp/spellchecking_asr_customization/convert_data_to_tarred.sh new file mode 100644 index 000000000000..d4265eb4beb6 --- /dev/null +++ b/examples/nlp/spellchecking_asr_customization/convert_data_to_tarred.sh @@ -0,0 +1,50 @@ +# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +# Path to NeMo repository +NEMO_PATH=NeMo + +DATA_PATH="data_folder" + +## data_folder_example +## ├── tarred_data +## | └── (output) +## ├── config.json +##   ├── label_map.txt +##   ├── semiotic_classes.txt +## ├── test.tsv +## ├── 1.tsv +## ├── ... +## └── 200.tsv + +## Each of {1-200}.tsv input files are 110'000 examples subsets of all.tsv (except for validation part), +## generated by https://github.com/bene-ges/nemo_compatible/blob/main/scripts/nlp/en_spellmapper/dataset_preparation/build_training_data.sh +## Note that in this example we use 110'000 as input and only pack 100'000 of them to tar file. +## This is because some input examples, e.g. 
too long, can be skipped during preprocessing, and we want all tar files to contain fixed equal number of examples. + +for part in {1..200} +do + python ${NEMO_PATH}/examples/nlp/spellchecking_asr_customization/create_tarred_dataset.py \ + lang="en" \ + data.train_ds.data_path=${DATA_PATH}/${part}.tsv \ + data.validation_ds.data_path=${DATA_PATH}/test.tsv \ + model.max_sequence_len=256 \ + model.language_model.pretrained_model_name=huawei-noah/TinyBERT_General_6L_768D \ + model.language_model.config_file=${DATA_PATH}/config.json \ + model.label_map=${DATA_PATH}/label_map.txt \ + model.semiotic_classes=${DATA_PATH}/semiotic_classes.txt \ + +output_tar_file=${DATA_PATH}/tarred_data/part${part}.tar \ + +take_first_n_lines=100000 +done diff --git a/examples/nlp/spellchecking_asr_customization/create_custom_vocab_index.py b/examples/nlp/spellchecking_asr_customization/create_custom_vocab_index.py new file mode 100644 index 000000000000..07d64ec5b723 --- /dev/null +++ b/examples/nlp/spellchecking_asr_customization/create_custom_vocab_index.py @@ -0,0 +1,72 @@ +# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +""" +This script is used to create an index of custom vocabulary and save it to file. +See "examples/nlp/spellchecking_asr_customization/run_infer.sh" for the whole inference pipeline. +""" + +from argparse import ArgumentParser + +from nemo.collections.nlp.data.spellchecking_asr_customization.utils import get_index, load_ngram_mappings + +parser = ArgumentParser(description="Create an index of custom vocabulary and save it to file") + +parser.add_argument( + "--input_name", required=True, type=str, help="Path to input file with custom vocabulary (plain text)" +) +parser.add_argument( + "--ngram_mappings", required=True, type=str, help="Path to input file with n-gram mapping vocabulary" +) +parser.add_argument("--output_name", required=True, type=str, help="Path to output file with custom vocabulary index") +parser.add_argument("--min_log_prob", default=-4.0, type=float, help="Threshold on log probability") +parser.add_argument( + "--max_phrases_per_ngram", + default=500, + type=int, + help="Threshold on number of phrases that can be stored for one n-gram key in index. 
Keys with more phrases are discarded.",) +parser.add_argument( + "--max_misspelled_freq", default=125000, type=int, help="Threshold on maximum frequency of misspelled n-gram" +) + +args = parser.parse_args() + +# Load custom vocabulary +custom_phrases = set() +with open(args.input_name, "r", encoding="utf-8") as f: + for line in f: + phrase = line.strip() + custom_phrases.add(" ".join(list(phrase.replace(" ", "_")))) +print("Size of customization vocabulary:", len(custom_phrases)) + +# Load n-gram mappings vocabulary (use the frequency threshold given on the command line) +ngram_mapping_vocab, ban_ngram = load_ngram_mappings(args.ngram_mappings, max_misspelled_freq=args.max_misspelled_freq) + +# Generate index of custom phrases +phrases, ngram2phrases = get_index( + custom_phrases, + ngram_mapping_vocab, + ban_ngram, + min_log_prob=args.min_log_prob, + max_phrases_per_ngram=args.max_phrases_per_ngram, +) + +# Save index to file +with open(args.output_name, "w", encoding="utf-8") as out: + for ngram in ngram2phrases: + for phrase_id, begin, size, logprob in ngram2phrases[ngram]: + phrase = phrases[phrase_id] + out.write(ngram + "\t" + phrase + "\t" + str(begin) + "\t" + str(size) + "\t" + str(logprob) + "\n")
diff --git a/examples/nlp/spellchecking_asr_customization/create_tarred_dataset.py b/examples/nlp/spellchecking_asr_customization/create_tarred_dataset.py new file mode 100644 index 000000000000..d0bdc2c9bd30 --- /dev/null +++ b/examples/nlp/spellchecking_asr_customization/create_tarred_dataset.py @@ -0,0 +1,99 @@ +# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +""" +This script is used to create a tarred dataset for SpellcheckingAsrCustomizationModel. + +This script uses the `/examples/nlp/spellchecking_asr_customization/conf/spellchecking_asr_customization_config.yaml` +config file by default. The other option is to set another config file via command +line arguments by `--config-name=CONFIG_FILE_PATH'. Probably it is worth looking +at the example config file to see the list of parameters used for training. + +USAGE Example: +1. Obtain a processed dataset +2.
Run: + python ${NEMO_PATH}/examples/nlp/spellchecking_asr_customization/create_tarred_dataset.py \ + lang=${LANG} \ + data.train_ds.data_path=${DATA_PATH}/train.tsv \ + model.language_model.pretrained_model_name=${LANGUAGE_MODEL} \ + model.label_map=${DATA_PATH}/label_map.txt \ + +output_tar_file=tarred/part1.tar \ + +take_first_n_lines=100000 + +""" +import pickle +import tarfile +from io import BytesIO + +from helpers import MODEL, instantiate_model_and_trainer +from omegaconf import DictConfig, OmegaConf + +from nemo.core.config import hydra_runner +from nemo.utils import logging + + +@hydra_runner(config_path="conf", config_name="spellchecking_asr_customization_config") +def main(cfg: DictConfig) -> None: + logging.info(f'Config Params: {OmegaConf.to_yaml(cfg)}') + logging.info("Start creating tar file from " + cfg.data.train_ds.data_path + " ...") + _, model = instantiate_model_and_trainer( + cfg, MODEL, True + ) # instantiate model like for training because we may not have pretrained model + dataset = model._train_dl.dataset + archive = tarfile.open(cfg.output_tar_file, mode="w") + max_lines = int(cfg.take_first_n_lines) + for i in range(len(dataset)): + if i >= max_lines: + logging.info("Reached " + str(max_lines) + " examples") + break + ( + input_ids, + input_mask, + segment_ids, + input_ids_for_subwords, + input_mask_for_subwords, + segment_ids_for_subwords, + character_pos_to_subword_pos, + labels_mask, + labels, + spans, + ) = dataset[i] + + # do not store masks as they are just arrays of 1 + content = { + "input_ids": input_ids, + "input_mask": input_mask, + "segment_ids": segment_ids, + "input_ids_for_subwords": input_ids_for_subwords, + "input_mask_for_subwords": input_mask_for_subwords, + "segment_ids_for_subwords": segment_ids_for_subwords, + "character_pos_to_subword_pos": character_pos_to_subword_pos, + "labels_mask": labels_mask, + "labels": labels, + "spans": spans, + } + b = BytesIO() + pickle.dump(content, b) + b.seek(0) + tarinfo = tarfile.TarInfo(name="example_" + str(i) + ".pkl") + tarinfo.size = b.getbuffer().nbytes + archive.addfile(tarinfo=tarinfo, fileobj=b) + + archive.close() + logging.info("Tar file " + cfg.output_tar_file + " created!") + + +if __name__ == '__main__': + main() diff --git a/examples/nlp/spellchecking_asr_customization/helpers.py b/examples/nlp/spellchecking_asr_customization/helpers.py new file mode 100644 index 000000000000..2db11b0e7d96 --- /dev/null +++ b/examples/nlp/spellchecking_asr_customization/helpers.py @@ -0,0 +1,86 @@ +# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
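+
+"""
+Helper functions for the SpellMapper example scripts.
+
+`instantiate_model_and_trainer` builds a PyTorch Lightning trainer and a
+SpellcheckingAsrCustomizationModel: the model is initialized from the config when no
+pretrained model is given, restored from a local .nemo file if the given path exists,
+or downloaded by its pretrained model name otherwise. When `do_training` is set,
+it also attaches the train and validation data loaders from the config.
+"""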
+ + +import os +from typing import Tuple + +import pytorch_lightning as pl +from omegaconf import DictConfig + +from nemo.collections.nlp.models import SpellcheckingAsrCustomizationModel +from nemo.collections.nlp.parts.nlp_overrides import NLPSaveRestoreConnector +from nemo.utils import logging + +__all__ = ["MODEL", "MODEL_NAMES", "instantiate_model_and_trainer"] + +MODEL = "spellchecking" +MODEL_NAMES = [MODEL] + + +def instantiate_model_and_trainer( + cfg: DictConfig, model_name: str, do_training: bool +) -> Tuple[pl.Trainer, SpellcheckingAsrCustomizationModel]: + """ Function for instantiating a model and a trainer + Args: + cfg: The config used to instantiate the model and the trainer. + model_name: A str indicates the model direction, currently only 'itn'. + do_training: A boolean flag indicates whether the model will be trained or evaluated. + + Returns: + trainer: A PyTorch Lightning trainer + model: A SpellcheckingAsrCustomizationModel + """ + + if model_name not in MODEL_NAMES: + raise ValueError(f"{model_name} is unknown model type") + + # Get configs for the corresponding models + trainer_cfg = cfg.get("trainer") + model_cfg = cfg.get("model") + pretrained_cfg = cfg.get("pretrained_model", None) + trainer = pl.Trainer(**trainer_cfg) + if not pretrained_cfg: + logging.info(f"Initializing {model_name} model") + if model_name == MODEL: + model = SpellcheckingAsrCustomizationModel(model_cfg, trainer=trainer) + else: + raise ValueError(f"{model_name} is unknown model type") + elif os.path.exists(pretrained_cfg): + logging.info(f"Restoring pretrained {model_name} model from {pretrained_cfg}") + save_restore_connector = NLPSaveRestoreConnector() + model = SpellcheckingAsrCustomizationModel.restore_from( + pretrained_cfg, save_restore_connector=save_restore_connector + ) + else: + logging.info(f"Loading pretrained model {pretrained_cfg}") + if model_name == MODEL: + if pretrained_cfg not in SpellcheckingAsrCustomizationModel.get_available_model_names(): + raise ( + ValueError( + f"{pretrained_cfg} not in the list of available Tagger models." + f"Select from {SpellcheckingAsrCustomizationModel.list_available_models()}" + ) + ) + model = SpellcheckingAsrCustomizationModel.from_pretrained(pretrained_cfg) + else: + raise ValueError(f"{model_name} is unknown model type") + + # Setup train and validation data + if do_training: + model.setup_training_data(train_data_config=cfg.data.train_ds) + model.setup_validation_data(val_data_config=cfg.data.validation_ds) + + logging.info(f"Model {model_name} -- Device {model.device}") + return trainer, model diff --git a/examples/nlp/spellchecking_asr_customization/postprocess_and_update_manifest.py b/examples/nlp/spellchecking_asr_customization/postprocess_and_update_manifest.py new file mode 100644 index 000000000000..871d5e5c0c0c --- /dev/null +++ b/examples/nlp/spellchecking_asr_customization/postprocess_and_update_manifest.py @@ -0,0 +1,79 @@ +# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + + +""" +This script is used to postprocess SpellMapper results and generate an updated nemo ASR manifest. +See "examples/nlp/spellchecking_asr_customization/run_infer.sh" for the whole inference pipeline. +""" + +from argparse import ArgumentParser + +from nemo.collections.nlp.data.spellchecking_asr_customization.utils import ( + update_manifest_with_spellmapper_corrections, +) + +parser = ArgumentParser(description="Postprocess SpellMapper results and generate an updated nemo ASR manifest") + +parser.add_argument("--input_manifest", required=True, type=str, help="Path to input nemo ASR manifest") +parser.add_argument( + "--field_name", default="pred_text", type=str, help="Name of json field with original ASR hypothesis text" +) +parser.add_argument( + "--short2full_name", + required=True, + type=str, + help="Path to input file with correspondence between sentence fragments and full sentences", +) +parser.add_argument( + "--spellmapper_results", required=True, type=str, help="Path to input file with SpellMapper inference results" +) +parser.add_argument("--output_manifest", required=True, type=str, help="Path to output nemo ASR manifest") +parser.add_argument("--min_prob", default=0.5, type=float, help="Threshold on replacement probability") +parser.add_argument( + "--use_dp", + action="store_true", + help="Whether to use additional replacement filtering by using dynamic programming", +) +parser.add_argument( + "--replace_hyphen_to_space", + action="store_true", + help="Whether to use space instead of hyphen in replaced fragments", +) +parser.add_argument( + "--ngram_mappings", type=str, required=True, help="File with ngram mappings, only needed if use_dp=true" +) +parser.add_argument( + "--min_dp_score_per_symbol", + default=-1.5, + type=float, + help="Minimum dynamic programming sum score averaged by hypothesis length", +) + +args = parser.parse_args() + +update_manifest_with_spellmapper_corrections( + input_manifest_name=args.input_manifest, + short2full_name=args.short2full_name, + output_manifest_name=args.output_manifest, + spellmapper_results_name=args.spellmapper_results, + min_prob=args.min_prob, + replace_hyphen_to_space=args.replace_hyphen_to_space, + field_name=args.field_name, + use_dp=args.use_dp, + ngram_mappings=args.ngram_mappings, + min_dp_score_per_symbol=args.min_dp_score_per_symbol, +) + +print("Resulting manifest saved to: ", args.output_manifest) diff --git a/examples/nlp/spellchecking_asr_customization/prepare_input_from_manifest.py b/examples/nlp/spellchecking_asr_customization/prepare_input_from_manifest.py new file mode 100644 index 000000000000..6fd5e524390a --- /dev/null +++ b/examples/nlp/spellchecking_asr_customization/prepare_input_from_manifest.py @@ -0,0 +1,129 @@ +# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ + +""" +This script contains an example on how to prepare input for SpellMapper inference from a nemo ASR manifest. +It splits sentences to shorter fragments, runs candidate retrieval and generates input in the required format. +It produces two output files: + 1. File with correspondence between sentence fragments and full sentences. + 2. File that will serve as input for SpellMapper inference. + +See "examples/nlp/spellchecking_asr_customization/run_infer.sh" for the whole inference pipeline. +""" + +from argparse import ArgumentParser + +from nemo.collections.nlp.data.spellchecking_asr_customization.utils import ( + extract_and_split_text_from_manifest, + get_candidates, + load_index, +) + +parser = ArgumentParser(description="Prepare input for SpellMapper inference from a nemo ASR manifest") +parser.add_argument("--manifest", required=True, type=str, help="Path to input manifest file") +parser.add_argument( + "--custom_vocab_index", required=True, type=str, help="Path to input file with custom vocabulary index" +) +parser.add_argument( + "--big_sample", + required=True, + type=str, + help="Path to input file with big sample of phrases to sample dummy candidates if there less than 10 are found by retrieval", +) +parser.add_argument( + "--short2full_name", + required=True, + type=str, + help="Path to output file with correspondence between sentence fragments and full sentences", +) +parser.add_argument( + "--output_name", + required=True, + type=str, + help="Path to output file that will serve as input for SpellMapper inference", +) +parser.add_argument("--field_name", default="pred_text", type=str, help="Name of json field with ASR hypothesis text") +parser.add_argument("--len_in_words", default=16, type=int, help="Maximum fragment length in words") +parser.add_argument( + "--step_in_words", + default=8, + type=int, + help="Step in words for moving to next fragment. If less than len_in_words, fragments will intersect", +) + +args = parser.parse_args() + +# Split ASR hypotheses to shorter fragments, because SpellMapper can't handle arbitrarily long sequences. +# The correspondence between short and original fragments is saved to a file and will be used at post-processing. 
+extract_and_split_text_from_manifest( + input_name=args.manifest, + output_name=args.short2full_name, + field_name=args.field_name, + len_in_words=args.len_in_words, + step_in_words=args.step_in_words, +) + +# Load index of custom vocabulary from file +phrases, ngram2phrases = load_index(args.custom_vocab_index) + +# Load big sample of phrases to sample dummy candidates if there less than 10 are found by retrieval +big_sample_of_phrases = set() +with open(args.big_sample, "r", encoding="utf-8") as f: + for line in f: + phrase, freq = line.strip().split("\t") + if int(freq) > 50: # do not want to use frequent phrases as dummy candidates + continue + if len(phrase) < 6 or len(phrase) > 15: # do not want to use too short or too long phrases as dummy candidates + continue + big_sample_of_phrases.add(phrase) + +big_sample_of_phrases = list(big_sample_of_phrases) + +# Generate input for SpellMapper inference +out = open(args.output_name, "w", encoding="utf-8") +with open(args.short2full_name, "r", encoding="utf-8") as f: + for line in f: + short_sent, _ = line.strip().split("\t") + sent = "_".join(short_sent.split()) + letters = list(sent) + candidates = get_candidates(ngram2phrases, phrases, letters, big_sample_of_phrases) + if len(candidates) == 0: + continue + if len(candidates) != 10: + raise ValueError("expect 10 candidates, got: ", len(candidates)) + + # We add two columns with targets and span_info. + # They have same format as during training, but start and end positions are APPROXIMATE, they will be adjusted when constructing BertExample. + targets = [] + span_info = [] + for idx, c in enumerate(candidates): + if c[1] == -1: + continue + targets.append(str(idx + 1)) # targets are 1-based + start = c[1] + # ensure that end is not outside sentence length (it can happen because c[2] is candidate length used as approximation) + end = min(c[1] + c[2], len(letters)) + span_info.append("CUSTOM " + str(start) + " " + str(end)) + out.write( + " ".join(letters) + + "\t" + + ";".join([x[0] for x in candidates]) + + "\t" + + " ".join(targets) + + "\t" + + ";".join(span_info) + + "\n" + ) +out.close() diff --git a/examples/nlp/spellchecking_asr_customization/run_infer.sh b/examples/nlp/spellchecking_asr_customization/run_infer.sh new file mode 100644 index 000000000000..09da98171c16 --- /dev/null +++ b/examples/nlp/spellchecking_asr_customization/run_infer.sh @@ -0,0 +1,99 @@ +# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
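+
+## The commands below go through five stages: build an index of the custom vocabulary,
+## prepare SpellMapper input from the ASR manifest, run SpellMapper inference,
+## postprocess the predictions into a corrected manifest, and compare WER of the
+## original and corrected manifests.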
+ +## RUN INFERENCE ON NEMO MANIFEST AND CUSTOM VOCABULARY + +## Path to NeMo repository +NEMO_PATH=NeMo + +## Download model repo from Hugging Face (if clone doesn't work, run "git lfs install" and try again) +git clone https://huggingface.co/bene-ges/spellmapper_asr_customization_en +## Download repo with test data +git clone https://huggingface.co/datasets/bene-ges/spellmapper_en_evaluation + +## Files in model repo +PRETRAINED_MODEL=spellmapper_asr_customization_en/training_10m_5ep.nemo +NGRAM_MAPPINGS=spellmapper_asr_customization_en/replacement_vocab_filt.txt +BIG_SAMPLE=spellmapper_asr_customization_en/big_sample.txt + +## Override these two files if you want to test on your own data +## File with input nemo ASR manifest +INPUT_MANIFEST=spellmapper_en_evaluation/medical_manifest_ctc.json +## File containing custom words and phrases (plain text) +CUSTOM_VOCAB=spellmapper_en_evaluation/medical_custom_vocab.json + +## Other files will be created +## File with index of custom vocabulary +INDEX="index.txt" +## File with short fragments and corresponding original sentences +SHORT2FULL="short2full.txt" +## File with input for SpellMapper inference +SPELLMAPPER_INPUT="spellmapper_input.txt" +## File with output of SpellMapper inference +SPELLMAPPER_OUTPUT="spellmapper_output.txt" +## File with output nemo ASR manifest +OUTPUT_MANIFEST="out_manifest.json" + + +# Create index of custom vocabulary +python ${NEMO_PATH}/examples/nlp/spellchecking_asr_customization/create_custom_vocab_index.py \ + --input_name ${CUSTOM_VOCAB} \ + --ngram_mappings ${NGRAM_MAPPINGS} \ + --output_name ${INDEX} \ + --min_log_prob -4.0 \ + --max_phrases_per_ngram 600 + +# Prepare input for SpellMapper inference +python ${NEMO_PATH}/examples/nlp/spellchecking_asr_customization/prepare_input_from_manifest.py \ + --manifest ${INPUT_MANIFEST} \ + --custom_vocab_index ${INDEX} \ + --big_sample ${BIG_SAMPLE} \ + --short2full_name ${SHORT2FULL} \ + --output_name ${SPELLMAPPER_INPUT} \ + --field_name "pred_text" \ + --len_in_words 16 \ + --step_in_words 8 + +# Run SpellMapper inference +python ${NEMO_PATH}/examples/nlp/spellchecking_asr_customization/spellchecking_asr_customization_infer.py \ + pretrained_model=${PRETRAINED_MODEL} \ + model.max_sequence_len=512 \ + inference.from_file=${SPELLMAPPER_INPUT} \ + inference.out_file=${SPELLMAPPER_OUTPUT} \ + inference.batch_size=16 \ + lang=en + +# Postprocess and create output corrected manifest +python ${NEMO_PATH}/examples/nlp/spellchecking_asr_customization/postprocess_and_update_manifest.py \ + --input_manifest ${INPUT_MANIFEST} \ + --short2full_name ${SHORT2FULL} \ + --output_manifest ${OUTPUT_MANIFEST} \ + --spellmapper_result ${SPELLMAPPER_OUTPUT} \ + --replace_hyphen_to_space \ + --field_name "pred_text" \ + --use_dp \ + --ngram_mappings ${NGRAM_MAPPINGS} \ + --min_dp_score_per_symbol -1.5 + +# Check WER of initial manifest +python ${NEMO_PATH}/examples/asr/speech_to_text_eval.py \ + dataset_manifest=${INPUT_MANIFEST} \ + use_cer=False \ + only_score_manifest=True + +# Check WER of corrected manifest +python ${NEMO_PATH}/examples/asr/speech_to_text_eval.py \ + dataset_manifest=${OUTPUT_MANIFEST} \ + use_cer=False \ + only_score_manifest=True diff --git a/examples/nlp/spellchecking_asr_customization/run_training.sh b/examples/nlp/spellchecking_asr_customization/run_training.sh new file mode 100644 index 000000000000..85dddbb2a038 --- /dev/null +++ b/examples/nlp/spellchecking_asr_customization/run_training.sh @@ -0,0 +1,56 @@ +# Copyright (c) 2023, NVIDIA CORPORATION & 
AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +## TRAIN WITH NON-TARRED DATA + +# Path to NeMo repository +NEMO_PATH=NeMo + +## Download repo with training data (very small example) +## If clone doesn't work, run "git lfs install" and try again +git clone https://huggingface.co/datasets/bene-ges/spellmapper_en_train_micro + +DATA_PATH=spellmapper_en_train_micro + +## Example of all files needed to run training with non-tarred data: +## spellmapper_en_train_micro +## ├── config.json +##   ├── label_map.txt +##   ├── semiotic_classes.txt +## ├── test.tsv +## └── train.tsv + +## To generate files config.json, label_map.txt, semiotic_classes.txt - run generate_configs.sh +## Files "train.tsv" and "test.tsv" contain training examples. +## For data preparation see https://github.com/bene-ges/nemo_compatible/blob/main/scripts/nlp/en_spellmapper/dataset_preparation/build_training_data.sh + +## Note that training with non-tarred data only works on single gpu. It makes sense if you use 1-2 million examples or less. + +python ${NEMO_PATH}/examples/nlp/spellchecking_asr_customization/spellchecking_asr_customization_train.py \ + lang="en" \ + data.validation_ds.data_path=${DATA_PATH}/test.tsv \ + data.train_ds.data_path=${DATA_PATH}/train.tsv \ + data.train_ds.batch_size=32 \ + data.train_ds.num_workers=8 \ + model.max_sequence_len=512 \ + model.language_model.pretrained_model_name=huawei-noah/TinyBERT_General_6L_768D \ + model.language_model.config_file=${DATA_PATH}/config.json \ + model.label_map=${DATA_PATH}/label_map.txt \ + model.semiotic_classes=${DATA_PATH}/semiotic_classes.txt \ + model.optim.lr=3e-5 \ + trainer.devices=[1] \ + trainer.num_nodes=1 \ + trainer.accelerator=gpu \ + trainer.strategy=ddp \ + trainer.max_epochs=5 diff --git a/examples/nlp/spellchecking_asr_customization/run_training_tarred.sh b/examples/nlp/spellchecking_asr_customization/run_training_tarred.sh new file mode 100644 index 000000000000..655c3e23e610 --- /dev/null +++ b/examples/nlp/spellchecking_asr_customization/run_training_tarred.sh @@ -0,0 +1,63 @@ +# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +## TRAIN WITH TARRED DATA + +# Path to NeMo repository +NEMO_PATH=NeMo + +DATA_PATH=data_folder + +## data_folder_example +## ├── train_tarred +## | ├── part1.tar +## | ├── ... 
+## | └── part200.tar
+## ├── config.json
+## ├── label_map.txt
+## ├── semiotic_classes.txt
+## └── test.tsv
+## To generate files config.json, label_map.txt, semiotic_classes.txt, run generate_configs.sh
+## To prepare data, see ${NEMO_PATH}/examples/nlp/spellchecking_asr_customization/dataset_preparation/build_training_data.sh
+## To convert data to tarred format, split all.tsv into pieces of 110'000 examples (except for the validation part) and use ${NEMO_PATH}/examples/nlp/spellchecking_asr_customization/dataset_preparation/convert_data_to_tarred.sh
+## To run training with tarred data, use ${NEMO_PATH}/examples/nlp/spellchecking_asr_customization/run_training_tarred.sh
+
+## ATTENTION: How to calculate model.optim.sched.max_steps:
+## Suppose you have 2'000'000 training examples and want to train for 5 epochs on 4 gpus with batch size 32.
+## 1 step consumes 32 (bs) * 4 (gpus) = 128 examples
+## 1 epoch makes 2000000/128 = 15625 steps (updates)
+## 5 epochs make 5 * 15625 = 78125 steps
+## The command below sets max_steps=195313, which appears to correspond to 5 epochs over ~10'000'000 examples
+## on 8 gpus with batch size 32: 10000000/(32*8) = 39062.5 steps per epoch, 5 * 39062.5 = 195312.5, rounded up to 195313.
+
+python ${NEMO_PATH}/examples/nlp/spellchecking_asr_customization/spellchecking_asr_customization_train.py \
+    lang="en" \
+    data.validation_ds.data_path=${DATA_PATH}/test.tsv \
+    data.train_ds.data_path=${DATA_PATH}/train_tarred/part_OP_1..100_CL_.tar \
+    data.train_ds.batch_size=32 \
+    data.train_ds.num_workers=16 \
+    +data.train_ds.use_tarred_dataset=true \
+    data.train_ds.shuffle=false \
+    data.validation_ds.batch_size=16 \
+    model.max_sequence_len=512 \
+    model.language_model.pretrained_model_name=huawei-noah/TinyBERT_General_6L_768D \
+    model.language_model.config_file=${DATA_PATH}/config.json \
+    model.label_map=${DATA_PATH}/label_map.txt \
+    model.semiotic_classes=${DATA_PATH}/semiotic_classes.txt \
+    model.optim.sched.name=CosineAnnealing \
+    +model.optim.sched.max_steps=195313 \
+    trainer.devices=8 \
+    trainer.num_nodes=1 \
+    trainer.accelerator=gpu \
+    trainer.strategy=ddp \
+    trainer.max_epochs=5
diff --git a/examples/nlp/spellchecking_asr_customization/spellchecking_asr_customization_infer.py b/examples/nlp/spellchecking_asr_customization/spellchecking_asr_customization_infer.py
new file mode 100644
index 000000000000..593264f14a5d
--- /dev/null
+++ b/examples/nlp/spellchecking_asr_customization/spellchecking_asr_customization_infer.py
@@ -0,0 +1,123 @@
+# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+"""
+This script contains an example of how to run inference with the SpellcheckingAsrCustomizationModel.
+
+An input line should consist of 4 tab-separated columns:
+    1. text of ASR-hypothesis
+    2. texts of 10 candidates separated by semicolon
+    3. 1-based ids of non-dummy candidates
+    4.
approximate start/end coordinates of non-dummy candidates (correspond to ids in third column) + +Example input (in one line): + t h e _ t a r a s i c _ o o r d a _ i s _ a _ p a r t _ o f _ t h e _ a o r t a _ l o c a t e d _ i n _ t h e _ t h o r a x + h e p a t i c _ c i r r h o s i s;u r a c i l;c a r d i a c _ a r r e s t;w e a n;a p g a r;p s y c h o m o t o r;t h o r a x;t h o r a c i c _ a o r t a;a v f;b l o c k a d e d + 1 2 6 7 8 9 10 + CUSTOM 6 23;CUSTOM 4 10;CUSTOM 4 15;CUSTOM 56 62;CUSTOM 5 19;CUSTOM 28 31;CUSTOM 39 48 + +Each line in SpellMapper output is tab-separated and consists of 4 columns: + 1. ASR-hypothesis (same as in input) + 2. 10 candidates separated with semicolon (same as in input) + 3. fragment predictions, separated with semicolon, each prediction is a tuple (start, end, candidate_id, probability) + 4. letter predictions - candidate_id predicted for each letter (this is only for debug purposes) + +Example output (in one line): + t h e _ t a r a s i c _ o o r d a _ i s _ a _ p a r t _ o f _ t h e _ a o r t a _ l o c a t e d _ i n _ t h e _ t h o r a x + h e p a t i c _ c i r r h o s i s;u r a c i l;c a r d i a c _ a r r e s t;w e a n;a p g a r;p s y c h o m o t o r;t h o r a x;t h o r a c i c _ a o r t a;a v f;b l o c k a d e d + 56 62 7 0.99998;4 20 8 0.95181;12 20 8 0.44829;4 17 8 0.99464;12 17 8 0.97645 + 8 8 8 0 8 8 8 8 8 8 8 8 8 8 8 8 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 7 7 7 7 7 7 + + +USAGE Example: +1. Train a model, or use a pretrained checkpoint. +2. Run on a single file: + python nemo/examples/nlp/spellchecking_asr_customization/spellchecking_asr_customization_infer.py \ + pretrained_model=${PRETRAINED_NEMO_CHECKPOINT} \ + model.max_sequence_len=512 \ + inference.from_file=input.txt \ + inference.out_file=output.txt \ + inference.batch_size=16 \ + lang=en +or on multiple files: + python ${NEMO_PATH}/examples/nlp/spellchecking_asr_customization/spellchecking_asr_customization_infer.py \ + pretrained_model=${PRETRAINED_NEMO_CHECKPOINT} \ + model.max_sequence_len=512 \ + +inference.from_filelist=filelist.txt \ + +inference.output_folder=output_folder \ + inference.batch_size=16 \ + lang=en + +This script uses the `/examples/nlp/spellchecking_asr_customization/conf/spellchecking_asr_customization_config.yaml` +config file by default. The other option is to set another config file via command +line arguments by `--config-name=CONFIG_FILE_PATH'. 
+""" + + +import os + +from helpers import MODEL, instantiate_model_and_trainer +from omegaconf import DictConfig, OmegaConf + +from nemo.core.config import hydra_runner +from nemo.utils import logging + + +@hydra_runner(config_path="conf", config_name="spellchecking_asr_customization_config") +def main(cfg: DictConfig) -> None: + logging.debug(f'Config Params: {OmegaConf.to_yaml(cfg)}') + + if cfg.pretrained_model is None: + raise ValueError("A pre-trained model should be provided.") + _, model = instantiate_model_and_trainer(cfg, MODEL, False) + + if cfg.model.max_sequence_len != model.max_sequence_len: + model.max_sequence_len = cfg.model.max_sequence_len + model.builder._max_seq_length = cfg.model.max_sequence_len + input_filenames = [] + output_filenames = [] + + if "from_filelist" in cfg.inference and "output_folder" in cfg.inference: + filelist_file = cfg.inference.from_filelist + output_folder = cfg.inference.output_folder + with open(filelist_file, "r", encoding="utf-8") as f: + for line in f: + path = line.strip() + input_filenames.append(path) + folder, name = os.path.split(path) + output_filenames.append(os.path.join(output_folder, name)) + else: + text_file = cfg.inference.from_file + logging.info(f"Running inference on {text_file}...") + if not os.path.exists(text_file): + raise ValueError(f"{text_file} not found.") + input_filenames.append(text_file) + output_filenames.append(cfg.inference.out_file) + + dataloader_cfg = { + "batch_size": cfg.inference.get("batch_size", 8), + "num_workers": cfg.inference.get("num_workers", 4), + "pin_memory": cfg.inference.get("num_workers", False), + } + for input_filename, output_filename in zip(input_filenames, output_filenames): + if not os.path.exists(input_filename): + logging.info(f"Skip non-existing {input_filename}.") + continue + model.infer(dataloader_cfg, input_filename, output_filename) + logging.info(f"Predictions saved to {output_filename}.") + + +if __name__ == "__main__": + main() diff --git a/examples/nlp/spellchecking_asr_customization/spellchecking_asr_customization_train.py b/examples/nlp/spellchecking_asr_customization/spellchecking_asr_customization_train.py new file mode 100644 index 000000000000..7ea9314d196d --- /dev/null +++ b/examples/nlp/spellchecking_asr_customization/spellchecking_asr_customization_train.py @@ -0,0 +1,66 @@ +# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +""" +This script contains an example on how to train SpellMapper (SpellcheckingAsrCustomizationModel). +It uses the `examples/nlp/spellchecking_asr_customization/conf/spellchecking_asr_customization_config.yaml` +config file by default. The other option is to set another config file via command +line arguments by `--config-name=CONFIG_FILE_PATH'. Probably it is worth looking +at the example config file to see the list of parameters used for training. + +USAGE Example: + See `examples/nlp/spellchecking_asr_customization/run_training.sh` for training on non-tarred data. 
+ and + `examples/nlp/spellchecking_asr_customization/run_training_tarred.sh` for training on tarred data. + +One (non-tarred) training example should consist of 4 tab-separated columns: + 1. text of ASR-hypothesis + 2. texts of 10 candidates separated by semicolon + 3. 1-based ids of correct candidates, or 0 if none + 4. start/end coordinates of correct candidates (correspond to ids in third column) +Example (in one line): + a s t r o n o m e r s _ d i d i e _ s o m o n _ a n d _ t r i s t i a n _ g l l o + d i d i e r _ s a u m o n;a s t r o n o m i e;t r i s t a n _ g u i l l o t;t r i s t e s s e;m o n a d e;c h r i s t i a n;a s t r o n o m e r;s o l o m o n;d i d i d i d i d i;m e r c y + 1 3 + CUSTOM 12 23;CUSTOM 28 41 +""" + +from helpers import MODEL, instantiate_model_and_trainer +from omegaconf import DictConfig, OmegaConf + +from nemo.core.config import hydra_runner +from nemo.utils import logging +from nemo.utils.exp_manager import exp_manager + + +@hydra_runner(config_path="conf", config_name="spellchecking_asr_customization_config") +def main(cfg: DictConfig) -> None: + logging.info(f'Config Params: {OmegaConf.to_yaml(cfg)}') + + # Train the model + if cfg.model.do_training: + logging.info( + "================================================================================================" + ) + logging.info('Start training...') + trainer, model = instantiate_model_and_trainer(cfg, MODEL, True) + spellchecking_exp_manager = cfg.get('exp_manager', None) + exp_manager(trainer, spellchecking_exp_manager) + trainer.fit(model) + logging.info('Training finished!') + + +if __name__ == '__main__': + main() diff --git a/examples/nlp/text_normalization_as_tagging/dataset_preparation/extract_giza_alignments.py b/examples/nlp/text_normalization_as_tagging/dataset_preparation/extract_giza_alignments.py index e2ae48a37a0b..f5a53b1f331d 100644 --- a/examples/nlp/text_normalization_as_tagging/dataset_preparation/extract_giza_alignments.py +++ b/examples/nlp/text_normalization_as_tagging/dataset_preparation/extract_giza_alignments.py @@ -19,9 +19,14 @@ import re from argparse import ArgumentParser -from typing import List, Tuple -import numpy as np +from nemo.collections.nlp.data.text_normalization_as_tagging.utils import ( + check_monotonicity, + fill_alignment_matrix, + get_targets, + get_targets_from_back, +) + parser = ArgumentParser(description='Extract final alignments from GIZA++ alignments') parser.add_argument('--mode', type=str, required=True, help='tn or itn') @@ -34,211 +39,13 @@ args = parser.parse_args() -def fill_alignment_matrix( - fline2: str, fline3: str, gline2: str, gline3: str -) -> Tuple[np.ndarray, List[str], List[str]]: - """Parse Giza++ direct and reverse alignment results and represent them as an alignment matrix - - Args: - fline2: e.g. "_2 0 1 4_" - fline3: e.g. "NULL ({ }) twenty ({ 1 }) fourteen ({ 2 3 4 })" - gline2: e.g. "twenty fourteen" - gline3: e.g. "NULL ({ }) _2 ({ 1 }) 0 ({ }) 1 ({ }) 4_ ({ 2 })" - - Returns: - matrix: a numpy array of shape (src_len, dst_len) filled with [0, 1, 2, 3], where 3 means a reliable alignment - the corresponding words were aligned to one another in direct and reverse alignment runs, 1 and 2 mean that the - words were aligned only in one direction, 0 - no alignment. - srctokens: e.g. ["twenty", "fourteen"] - dsttokens: e.g. 
["_2", "0", "1", "4_"] - - For example, the alignment matrix for the above example may look like: - [[3, 0, 0, 0] - [0, 2, 2, 3]] - """ - if fline2 is None or gline2 is None or fline3 is None or gline3 is None: - raise ValueError(f"empty params") - srctokens = gline2.split() - dsttokens = fline2.split() - pattern = r"([^ ]+) \(\{ ([^\(\{\}\)]*) \}\)" - src2dst = re.findall(pattern, fline3.replace("({ })", "({ })")) - dst2src = re.findall(pattern, gline3.replace("({ })", "({ })")) - if len(src2dst) != len(srctokens) + 1: - raise ValueError( - "length mismatch: len(src2dst)=" - + str(len(src2dst)) - + "; len(srctokens)" - + str(len(srctokens)) - + "\n" - + gline2 - + "\n" - + fline3 - ) - if len(dst2src) != len(dsttokens) + 1: - raise ValueError( - "length mismatch: len(dst2src)=" - + str(len(dst2src)) - + "; len(dsttokens)" - + str(len(dsttokens)) - + "\n" - + fline2 - + "\n" - + gline3 - ) - matrix = np.zeros((len(srctokens), len(dsttokens))) - for i in range(1, len(src2dst)): - token, to_str = src2dst[i] - if to_str == "": - continue - to = list(map(int, to_str.split())) - for t in to: - matrix[i - 1][t - 1] = 2 - - for i in range(1, len(dst2src)): - token, to_str = dst2src[i] - if to_str == "": - continue - to = list(map(int, to_str.split())) - for t in to: - matrix[t - 1][i - 1] += 1 - - return matrix, srctokens, dsttokens - - -def check_monotonicity(matrix: np.ndarray) -> bool: - """Check if alignment is monotonous - i.e. the relative order is preserved (no swaps). - - Args: - matrix: a numpy array of shape (src_len, dst_len) filled with [0, 1, 2, 3], where 3 means a reliable alignment - the corresponding words were aligned to one another in direct and reverse alignment runs, 1 and 2 mean that the - words were aligned only in one direction, 0 - no alignment. - """ - is_sorted = lambda k: np.all(k[:-1] <= k[1:]) - - a = np.argwhere(matrix == 3) - b = np.argwhere(matrix == 2) - c = np.vstack((a, b)) - d = c[c[:, 1].argsort()] # sort by second column (less important) - d = d[d[:, 0].argsort(kind="mergesort")] - return is_sorted(d[:, 1]) - - -def get_targets(matrix: np.ndarray, dsttokens: List[str]) -> List[str]: - """Join some of the destination tokens, so that their number becomes the same as the number of input words. - Unaligned tokens tend to join to the left aligned token. - - Args: - matrix: a numpy array of shape (src_len, dst_len) filled with [0, 1, 2, 3], where 3 means a reliable alignment - the corresponding words were aligned to one another in direct and reverse alignment runs, 1 and 2 mean that the - words were aligned only in one direction, 0 - no alignment. - dsttokens: e.g. ["_2", "0", "1", "4_"] - Returns: - targets: list of string tokens, with one-to-one correspondence to matrix.shape[0] - - Example: - If we get - matrix=[[3, 0, 0, 0] - [0, 2, 2, 3]] - dsttokens=["_2", "0", "1", "4_"] - it gives - targets = ["_201", "4_"] - Actually, this is a mistake instead of ["_20", "14_"]. That will be further corrected by regular expressions. 
- """ - targets = [] - last_covered_dst_id = -1 - for i in range(len(matrix)): - dstlist = [] - for j in range(last_covered_dst_id + 1, len(dsttokens)): - # matrix[i][j] == 3: safe alignment point - if matrix[i][j] == 3 or ( - j == last_covered_dst_id + 1 - and np.all(matrix[i, :] == 0) # if the whole line does not have safe points - and np.all(matrix[:, j] == 0) # and the whole column does not have safe points, match them - ): - if len(targets) == 0: # if this is first safe point, attach left unaligned columns to it, if any - for k in range(0, j): - if np.all(matrix[:, k] == 0): # if column k does not have safe points - dstlist.append(dsttokens[k]) - else: - break - dstlist.append(dsttokens[j]) - last_covered_dst_id = j - for k in range(j + 1, len(dsttokens)): - if np.all(matrix[:, k] == 0): # if column k does not have safe points - dstlist.append(dsttokens[k]) - last_covered_dst_id = k - else: - break - - if len(dstlist) > 0: - if args.mode == "tn": - targets.append("_".join(dstlist)) - else: - targets.append("".join(dstlist)) - else: - targets.append("") - return targets - - -def get_targets_from_back(matrix: np.ndarray, dsttokens: List[str]) -> List[str]: - """Join some of the destination tokens, so that their number becomes the same as the number of input words. - Unaligned tokens tend to join to the right aligned token. - - Args: - matrix: a numpy array of shape (src_len, dst_len) filled with [0, 1, 2, 3], where 3 means a reliable alignment - the corresponding words were aligned to one another in direct and reverse alignment runs, 1 and 2 mean that the - words were aligned only in one direction, 0 - no alignment. - dsttokens: e.g. ["_2", "0", "1", "4_"] - Returns: - targets: list of string tokens, with one-to-one correspondence to matrix.shape[0] - - Example: - If we get - matrix=[[3, 0, 0, 0] - [0, 2, 2, 3]] - dsttokens=["_2", "0", "1", "4_"] - it gives - targets = ["_2", "014_"] - Actually, this is a mistake instead of ["_20", "14_"]. That will be further corrected by regular expressions. - """ - - targets = [] - last_covered_dst_id = len(dsttokens) - for i in range(len(matrix) - 1, -1, -1): - dstlist = [] - for j in range(last_covered_dst_id - 1, -1, -1): - if matrix[i][j] == 3 or ( - j == last_covered_dst_id - 1 and np.all(matrix[i, :] == 0) and np.all(matrix[:, j] == 0) - ): - if len(targets) == 0: - for k in range(len(dsttokens) - 1, j, -1): - if np.all(matrix[:, k] == 0): - dstlist.append(dsttokens[k]) - else: - break - dstlist.append(dsttokens[j]) - last_covered_dst_id = j - for k in range(j - 1, -1, -1): - if np.all(matrix[:, k] == 0): - dstlist.append(dsttokens[k]) - last_covered_dst_id = k - else: - break - if len(dstlist) > 0: - if args.mode == "tn": - targets.append("_".join(list(reversed(dstlist)))) - else: - targets.append("".join(list(reversed(dstlist)))) - else: - targets.append("") - return list(reversed(targets)) - - def main() -> None: g = open(args.giza_dir + "/GIZA++." + args.giza_suffix, "r", encoding="utf-8") f = open(args.giza_dir + "/GIZA++reverse." 
+ args.giza_suffix, "r", encoding="utf-8") + target_inner_delimiter = "" if args.mode == "tn": g, f = f, g + target_inner_delimiter = "_" out = open(args.giza_dir + "/" + args.out_filename, "w", encoding="utf-8") cache = {} good_count, not_mono_count, not_covered_count, exception_count = 0, 0, 0, 0 @@ -277,8 +84,8 @@ def main() -> None: else: matrix[matrix <= 2] = 0 # leave only 1-to-1 alignment points if check_monotonicity(matrix): - targets = get_targets(matrix, dsttokens) - targets_from_back = get_targets_from_back(matrix, dsttokens) + targets = get_targets(matrix, dsttokens, delimiter=target_inner_delimiter) + targets_from_back = get_targets_from_back(matrix, dsttokens, delimiter=target_inner_delimiter) if len(targets) != len(srctokens): raise ValueError( "targets length doesn't match srctokens length: len(targets)=" diff --git a/nemo/collections/nlp/data/spellchecking_asr_customization/__init__.py b/nemo/collections/nlp/data/spellchecking_asr_customization/__init__.py new file mode 100644 index 000000000000..4e786276108c --- /dev/null +++ b/nemo/collections/nlp/data/spellchecking_asr_customization/__init__.py @@ -0,0 +1,20 @@ +# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +from nemo.collections.nlp.data.spellchecking_asr_customization.dataset import ( + SpellcheckingAsrCustomizationDataset, + SpellcheckingAsrCustomizationTestDataset, + TarredSpellcheckingAsrCustomizationDataset, +) diff --git a/nemo/collections/nlp/data/spellchecking_asr_customization/bert_example.py b/nemo/collections/nlp/data/spellchecking_asr_customization/bert_example.py new file mode 100644 index 000000000000..803d0eaf8aed --- /dev/null +++ b/nemo/collections/nlp/data/spellchecking_asr_customization/bert_example.py @@ -0,0 +1,593 @@ +# Copyright 2019 The Google Research Authors. +# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import logging +from collections import OrderedDict +from os import path +from typing import Dict, List, Optional, Tuple, Union + +from transformers import PreTrainedTokenizerBase + +"""Build BERT Examples from asr hypothesis, customization candidates, target labels, span info. +""" + + +class BertExample(object): + """Class for training and inference examples for BERT. + + Attributes: + features: Feature dictionary. 
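+            The dictionary keys are "input_ids", "input_mask", "segment_ids", "input_ids_for_subwords",
+            "input_mask_for_subwords", "segment_ids_for_subwords", "character_pos_to_subword_pos",
+            "fragment_indices", "labels_mask", "labels", "spans" (see the constructor arguments below).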
+ """ + + def __init__( + self, + input_ids: List[int], + input_mask: List[int], + segment_ids: List[int], + input_ids_for_subwords: List[int], + input_mask_for_subwords: List[int], + segment_ids_for_subwords: List[int], + character_pos_to_subword_pos: List[int], + fragment_indices: List[Tuple[int, int, int]], + labels_mask: List[int], + labels: List[int], + spans: List[Tuple[int, int, int]], + default_label: int, + ) -> None: + """Inputs to the example wrapper + + Args: + input_ids: indices of single characters (treated as subwords) + input_mask: list of bools with 0s in place of input_ids to be masked + segment_ids: list of ints from 0 to 10 to denote the text segment type ( + 0 - for tokens of ASR hypothesis, + 1 - for tokens of the first candidate + ... + 10 - for tokens of the tenth candidate + ) + input_ids_for_subwords: indices of real subwords (as tokenized by bert tokenizer) + input_mask_for_subwords: list of bools with 0s in place of input_ids_for_subwords to be masked + segment_ids_for_subwords: same as segment_ids but for input_ids_for_subwords + character_pos_to_subword_pos: list of size=len(input_ids), value=(position of corresponding subword in input_ids_for_subwords) + fragment_indices: list of tuples (start_position, end_position, candidate_id), end is exclusive, candidate_id can be -1 if not set + labels_mask: bool tensor with 0s in place of label tokens to be masked + labels: indices of semiotic classes which should be predicted from each of the + corresponding input tokens + spans: list of tuples (class_id, start_position, end_position), end is exclusive, class is always 1(CUSTOM) + default_label: The default label + """ + input_len = len(input_ids) + if not ( + input_len == len(input_mask) + and input_len == len(segment_ids) + and input_len == len(labels_mask) + and input_len == len(labels) + and input_len == len(character_pos_to_subword_pos) + ): + raise ValueError("All feature lists should have the same length ({})".format(input_len)) + + input_len_for_subwords = len(input_ids_for_subwords) + if not ( + input_len_for_subwords == len(input_mask_for_subwords) + and input_len_for_subwords == len(segment_ids_for_subwords) + ): + raise ValueError( + "All feature lists for subwords should have the same length ({})".format(input_len_for_subwords) + ) + + self.features = OrderedDict( + [ + ("input_ids", input_ids), + ("input_mask", input_mask), + ("segment_ids", segment_ids), + ("input_ids_for_subwords", input_ids_for_subwords), + ("input_mask_for_subwords", input_mask_for_subwords), + ("segment_ids_for_subwords", segment_ids_for_subwords), + ("character_pos_to_subword_pos", character_pos_to_subword_pos), + ("fragment_indices", fragment_indices), + ("labels_mask", labels_mask), + ("labels", labels), + ("spans", spans), + ] + ) + self._default_label = default_label + + +class BertExampleBuilder(object): + """Builder class for BertExample objects.""" + + def __init__( + self, + label_map: Dict[str, int], + semiotic_classes: Dict[str, int], + tokenizer: PreTrainedTokenizerBase, + max_seq_length: int, + ) -> None: + """Initializes an instance of BertExampleBuilder. + + Args: + label_map: Mapping from tags to tag IDs. + semiotic_classes: Mapping from semiotic classes to their ids. + tokenizer: Tokenizer object. + max_seq_length: Maximum sequence length. 
+ """ + self._label_map = label_map + self._semiotic_classes = semiotic_classes + self._tokenizer = tokenizer + self._max_seq_length = max_seq_length + # one span usually covers one or more words and it only exists for custom phrases, so there are much less spans than characters. + self._max_spans_length = max(4, int(max_seq_length / 20)) + self._pad_id = self._tokenizer.pad_token_id + self._default_label = 0 + + def build_bert_example( + self, hyp: str, ref: str, target: Optional[str] = None, span_info: Optional[str] = None, infer: bool = False + ) -> Optional[BertExample]: + """Constructs a BERT Example. + + Args: + hyp: Hypothesis text. + ref: Candidate customization variants divided by ';' + target: + if infer==False, string of labels (each label is 1-based index of correct candidate) or 0. + if infer==True, it can be None or string of labels (each label is 1-based index of some candidate). In inference this can be used to get corresponding fragments to fragment_indices. + span_info: + string of format "CUSTOM 6 20;CUSTOM 40 51", number of parts corresponds to number of targets. Can be empty if target is 0. + If infer==False, numbers are correct start and end(exclusive) positions of the corresponding target candidate in the text. + If infer==True, numbers are EXPECTED positions in the text. In inference this can be used to get corresponding fragments to fragment_indices. + infer: inference mode + Returns: + BertExample, or None if the conversion from text to tags was infeasible + + Example (infer=False): + hyp: "a s t r o n o m e r s _ d i d i e _ s o m o n _ a n d _ t r i s t i a n _ g l l o" + ref: "d i d i e r _ s a u m o n;a s t r o n o m i e;t r i s t a n _ g u i l l o t;t r i s t e s s e;m o n a d e;c h r i s t i a n;a s t r o n o m e r;s o l o m o n;d i d i d i d i d i;m e r c y" + target: "1 3" + span_info: "CUSTOM 12 23;CUSTOM 28 41" + """ + if not ref.count(";") == 9: + raise ValueError("Expect 10 candidates: " + ref) + + span_info_parts = [] + targets = [] + + if len(target) > 0 and target != "0": + span_info_parts = span_info.split(";") + targets = list(map(int, target.split(" "))) + if len(span_info_parts) != len(targets): + raise ValueError( + "len(span_info_parts)=" + + str(len(span_info_parts)) + + " is different from len(target_parts)=" + + str(len(targets)) + ) + + tags = [0 for _ in hyp.split()] + if not infer: + for p, t in zip(span_info_parts, targets): + c, start, end = p.split(" ") + start = int(start) + end = int(end) + tags[start:end] = [t for i in range(end - start)] + + # get input features for characters + (input_ids, input_mask, segment_ids, labels_mask, labels, _, _,) = self._get_input_features( + hyp=hyp, ref=ref, tags=tags + ) + + # get input features for words + hyp_with_words = hyp.replace(" ", "").replace("_", " ") + ref_with_words = ref.replace(" ", "").replace("_", " ") + ( + input_ids_for_subwords, + input_mask_for_subwords, + segment_ids_for_subwords, + _, + _, + _, + _, + ) = self._get_input_features(hyp=hyp_with_words, ref=ref_with_words, tags=None) + + # used in forward to concatenate subword embeddings to character embeddings + character_pos_to_subword_pos = self._map_characters_to_subwords(input_ids, input_ids_for_subwords) + + fragment_indices = [] + if infer: + # used in inference to take argmax over whole fragments instead of separate characters to get more consistent predictions + fragment_indices = self._get_fragment_indices(hyp, targets, span_info_parts) + + spans = [] + if not infer: + # during training spans are used in validation 
step to calculate accuracy on whole custom phrases instead of separate characters + spans = self._get_spans(span_info_parts) + + if len(input_ids) > self._max_seq_length or len(spans) > self._max_spans_length: + print( + "Max len exceeded: len(input_ids)=", + len(input_ids), + "; _max_seq_length=", + self._max_seq_length, + "; len(spans)=", + len(spans), + "; _max_spans_length=", + self._max_spans_length, + ) + return None + + example = BertExample( + input_ids=input_ids, + input_mask=input_mask, + segment_ids=segment_ids, + input_ids_for_subwords=input_ids_for_subwords, + input_mask_for_subwords=input_mask_for_subwords, + segment_ids_for_subwords=segment_ids_for_subwords, + character_pos_to_subword_pos=character_pos_to_subword_pos, + fragment_indices=fragment_indices, + labels_mask=labels_mask, + labels=labels, + spans=spans, + default_label=self._default_label, + ) + return example + + def _get_spans(self, span_info_parts: List[str]) -> List[Tuple[int, int, int]]: + """ Converts span_info string into a list of (class_id, start, end) where start, end are coordinates of starting and ending(exclusive) tokens in input_ids of BertExample + + Example: + span_info_parts: ["CUSTOM 37 41", "CUSTOM 47 52", "CUSTOM 42 46", "CUSTOM 0 7"] + result: [(1, 38, 42), (1, 48, 53), (1, 43, 47), (1, 1, 8)] + """ + result_spans = [] + + for p in span_info_parts: + if p == "": + break + c, start, end = p.split(" ") + if c not in self._semiotic_classes: + raise KeyError("class=" + c + " not found in self._semiotic_classes") + cid = self._semiotic_classes[c] + # +1 because this should be indexing on input_ids which has [CLS] token at beginning + start = int(start) + 1 + end = int(end) + 1 + result_spans.append((cid, start, end)) + return result_spans + + def _get_fragment_indices( + self, hyp: str, targets: List[int], span_info_parts: List[str] + ) -> Tuple[List[Tuple[int, int, int]]]: + """ Build fragment indices for real candidates. + This is used only at inference. + After external candidate retrieval we know approximately, where the candidate is located in the text (from the positions of matched n-grams). + In this function we + 1) adjust start/end positions to match word borders (possibly in multiple ways). + 2) generate content for fragment_indices tensor (it will be used during inference to average all predictions inside each fragment). + + Args: + hyp: ASR-hypothesis where space separates single characters (real space is replaced to underscore). + targets: list of candidate ids (only for real candidates, not dummy) + span_info_parts: list of strings of format like "CUSTOM 12 25", corresponding to each of targets, with start/end coordinates in text. + Returns: + List of tuples (start, end, target) where start and end are positions in ASR-hypothesis, target is candidate_id. + Note that returned fragments can be unsorted and can overlap, it's ok. + Example: + hyp: "a s t r o n o m e r s _ d i d i e _ s o m o n _ a n d _ t r i s t i a n _ g l l o" + targets: [1 2 3 4 6 7 9] + span_info_parts: ["CUSTOM 12 25", "CUSTOM 0 10", "CUSTOM 27 42", ...], where numbers are EXPECTED start/end positions of corresponding target candidates in the text. These positions will be adjusted in this functuion. 
+ fragment_indices: [(1, 12, 2), (13, 24, 1), (13, 28, 1), ..., (29, 42, 3)] + """ + + fragment_indices = [] + + letters = hyp.split() + + for target, p in zip(targets, span_info_parts): + _, start, end = p.split(" ") + start = int(start) + end = min(int(end), len(hyp)) # guarantee that end is not outside length + + # Adjusting strategy 1: expand both sides to the nearest space. + # Adjust start by finding the nearest left space or beginning of text. If start is already some word beginning, it won't change. + k = start + while k > 0 and letters[k] != '_': + k -= 1 + adjusted_start = k if k == 0 else k + 1 + + # Adjust end by finding the nearest right space. If end is already space or sentence end, it won't change. + k = end + while k < len(letters) and letters[k] != '_': + k += 1 + adjusted_end = k + + # +1 because this should be indexing on input_ids which has [CLS] token at beginning + fragment_indices.append((adjusted_start + 1, adjusted_end + 1, target)) + + # Adjusting strategy 2: try to shrink to the closest space (from left or right or both sides). + # For example, here the candidate "shippers" has a matching n-gram covering part of previous word + # a b o u t _ o u r _ s h i p e r s _ b u t _ y o u _ k n o w + # 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 + expanded_fragment = "".join(letters[adjusted_start:adjusted_end]) + left_space_position = expanded_fragment.find("_") + right_space_position = expanded_fragment.rfind("_") + is_left_shrink = False + is_right_shrink = False + if left_space_position > -1 and left_space_position < len(expanded_fragment) / 2: + # +1 because of CLS token, another +1 to put start position after found space + fragment_indices.append((adjusted_start + 1 + left_space_position + 1, adjusted_end + 1, target)) + is_left_shrink = True + if right_space_position > -1 and right_space_position > len(expanded_fragment) / 2: + fragment_indices.append((adjusted_start + 1, adjusted_start + 1 + right_space_position, target)) + is_right_shrink = True + if is_left_shrink and is_right_shrink: + fragment_indices.append( + (adjusted_start + 1 + left_space_position + 1, adjusted_start + 1 + right_space_position, target) + ) + + return fragment_indices + + def _map_characters_to_subwords(self, input_ids: List[int], input_ids_for_subwords: List[int]) -> List[int]: + """ Maps each single character to the position of its corresponding subword. + + Args: + input_ids: List of character token ids. + input_ids_for_subwords: List of subword token ids. + Returns: + List of subword positions in input_ids_for_subwords. Its length is equal to len(input_ids) + + Example: + input_ids: [101, 1037, 1055, 1056, 1054, 1051, 1050, ..., 1051, 102, 1040, ..., 1050, 102, 1037, ..., 1041, 102, ..., 102] + input_ids_for_subwords: [101, 26357, 2106, 2666, 2061, 8202, 1998, 13012, 16643, 2319, 1043, 7174, 102, 2106, 3771, 7842, 2819, 2239, 102, ..., 102] + result: [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 5, ... 
, 45, 46, 46, 46, 46, 46, 47] + """ + character_pos_to_subword_pos = [0 for _ in input_ids] + + ## '[CLS]', 'a', 's', 't', 'r', 'o', 'n', 'o', 'm', 'e', 'r', 's', '_', 'd', 'i', ..., 'l', 'o', '[SEP]', 'd', 'i', 'd', 'i', 'e', 'r', '_', 's', 'a', 'u', 'm', 'o', 'n', ..., '[SEP]' + tokens = self._tokenizer.convert_ids_to_tokens(input_ids) + ## '[CLS]', 'astronomers', 'did', '##ie', 'so', '##mon', 'and', 'tri', '##sti', '##an', 'g', '##llo', '[SEP]', 'did', '##ier', 'sa', '##um', '##on', '[SEP]', 'astro', '##no', '##mie', '[SEP]', 'tristan', 'gui', '##llo', '##t', '[SEP]', ..., '[SEP]', 'mercy', '[SEP]'] + tokens_for_subwords = self._tokenizer.convert_ids_to_tokens(input_ids_for_subwords) + j = 0 # index for tokens_for_subwords + j_offset = 0 # current letter index within subword + for i in range(len(tokens)): + character = tokens[i] + subword = tokens_for_subwords[j] + if character == "[CLS]" and subword == "[CLS]": + character_pos_to_subword_pos[i] = j + j += 1 + continue + if character == "[SEP]" and subword == "[SEP]": + character_pos_to_subword_pos[i] = j + j += 1 + continue + if character == "[CLS]" or character == "[SEP]" or subword == "[CLS]" or subword == "[SEP]": + raise IndexError( + "character[" + + str(i) + + "]=" + + character + + "; subword[" + + str(j) + + ";=" + + subword + + "subwords=" + + str(tokens_for_subwords) + ) + # At this point we expect that + # subword either 1) is a normal first token of a word or 2) starts with "##" (not first word token) + # character either 1) is a normal character or 2) is a space character "_" + if character == "_": + character_pos_to_subword_pos[i] = j - 1 # space is assigned to previous subtoken + continue + if j_offset < len(subword): + if character == subword[j_offset]: + character_pos_to_subword_pos[i] = j + j_offset += 1 + else: + raise IndexError( + "character mismatch:" + + "i=" + + str(i) + + "j=" + + str(j) + + "j_offset=" + + str(j_offset) + + "; len(tokens)=" + + str(len(tokens)) + + "; len(subwords)=" + + str(len(tokens_for_subwords)) + ) + # if subword is finished, increase j + if j_offset >= len(subword): + j += 1 + j_offset = 0 + if j >= len(tokens_for_subwords): + break + if tokens_for_subwords[j].startswith("##"): + j_offset = 2 + # check that all subword tokens are processed + if j < len(tokens_for_subwords): + raise IndexError( + "j=" + + str(j) + + "; len(tokens)=" + + str(len(tokens)) + + "; len(subwords)=" + + str(len(tokens_for_subwords)) + ) + return character_pos_to_subword_pos + + def _get_input_features( + self, hyp: str, ref: str, tags: List[int] + ) -> Tuple[List[int], List[int], List[int], List[int], List[int], List[str], List[int]]: + """Converts given ASR-hypothesis(hyp) and candidate string(ref) to features(token ids, mask, segment ids, etc). + + Args: + hyp: Hypothesis text. + ref: Candidate customization variants divided by ';' + tags: List of labels corresponding to each token of ASR-hypothesis or None when building an example during inference. + Returns: + Features (input_ids, input_mask, segment_ids, labels_mask, labels, hyp_tokens, token_start_indices) + + Note that this method is called both for character-based example and for word-based example (to split to subwords). 
+ + Character-based example: + hyp: "a s t r o n o m e r s _ d i d i e _ s o m o n _ a n d _ t r i s t i a n _ g l l o" + ref: "d i d i e r _ s a u m o n;a s t r o n o m i e;t r i s t a n _ g u i l l o t;t r i s t e s s e;m o n a d e;c h r i s t i a n;a s t r o n o m e r;s o l o m o n;d i d i d i d i d i;m e r c y" + tags: "0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 3 3 3 3 3 3 3 3 3 3 3 3 3" + + resulting token sequence: + '[CLS]', 'a', 's', 't', 'r', 'o', 'n', 'o', 'm', 'e', 'r', 's', '_', 'd', 'i', ..., 'l', 'o', '[SEP]', 'd', 'i', 'd', 'i', 'e', 'r', '_', 's', 'a', 'u', 'm', 'o', 'n', ..., '[SEP]' + + Word-based example: + hyp: "astronomers didie somon and tristian gllo" + ref: "didier saumon;astronomie;tristan guillot;tristesse;monade;christian;astronomer;solomon;dididididi;mercy" + tags: None (not used for word-based case) + + resulting token sequence: + '[CLS]', 'astronomers', 'did', '##ie', 'so', '##mon', 'and', 'tri', '##sti', '##an', 'g', '##llo', '[SEP]', 'did', '##ier', 'sa', '##um', '##on', '[SEP]', 'astro', '##no', '##mie', '[SEP]', 'tristan', 'gui', '##llo', '##t', '[SEP]', ..., '[SEP]', 'mercy', '[SEP]'] + """ + + labels_mask = [] + labels = [] + if tags is None: + hyp_tokens, token_start_indices = self._split_to_wordpieces(hyp.split()) + else: + hyp_tokens, labels, token_start_indices = self._split_to_wordpieces_with_labels(hyp.split(), tags) + references = ref.split(";") + all_ref_tokens = [] + all_ref_segment_ids = [] + for i in range(len(references)): + ref_tokens, _ = self._split_to_wordpieces(references[i].split()) + all_ref_tokens.extend(ref_tokens + ["[SEP]"]) + all_ref_segment_ids.extend([i + 1] * (len(ref_tokens) + 1)) + + input_tokens = ["[CLS]"] + hyp_tokens + ["[SEP]"] + all_ref_tokens # ends with [SEP] + input_ids = self._tokenizer.convert_tokens_to_ids(input_tokens) + input_mask = [1] * len(input_ids) + segment_ids = [0] + [0] * len(hyp_tokens) + [0] + all_ref_segment_ids + if len(input_ids) != len(segment_ids): + raise ValueError( + "len(input_ids)=" + + str(len(input_ids)) + + " is different from len(segment_ids)=" + + str(len(segment_ids)) + ) + + if tags: + labels_mask = [0] + [1] * len(labels) + [0] + [0] * len(all_ref_tokens) + labels = [0] + labels + [0] + [0] * len(all_ref_tokens) + return (input_ids, input_mask, segment_ids, labels_mask, labels, hyp_tokens, token_start_indices) + + def _split_to_wordpieces_with_labels( + self, tokens: List[str], labels: List[int] + ) -> Tuple[List[str], List[int], List[int]]: + """Splits tokens (and the labels accordingly) to WordPieces. + + Args: + tokens: Tokens to be split. + labels: Labels (one per token) to be split. + + Returns: + 3-tuple with the split tokens, split labels, and the indices of starting tokens of words + """ + bert_tokens = [] # Original tokens split into wordpieces. + bert_labels = [] # Label for each wordpiece. + # Index of each wordpiece that starts a new token. + token_start_indices = [] + for i, token in enumerate(tokens): + # '+ 1' is because bert_tokens will be prepended by [CLS] token later. + token_start_indices.append(len(bert_tokens) + 1) + pieces = self._tokenizer.tokenize(token) + bert_tokens.extend(pieces) + bert_labels.extend([labels[i]] * len(pieces)) + return bert_tokens, bert_labels, token_start_indices + + def _split_to_wordpieces(self, tokens: List[str]) -> Tuple[List[str], List[int]]: + """Splits tokens to WordPieces. + + Args: + tokens: Tokens to be split. + + Returns: + tuple with the split tokens, and the indices of the WordPieces that start a token. 
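+
+        Example (illustrative, reusing the wordpieces shown in the examples above):
+            tokens: ["didier", "saumon"]
+            bert_tokens: ["did", "##ier", "sa", "##um", "##on"]
+            token_start_indices: [1, 3]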
+ """ + bert_tokens = [] # Original tokens split into wordpieces. + # Index of each wordpiece that starts a new token. + token_start_indices = [] + for i, token in enumerate(tokens): + # '+ 1' is because bert_tokens will be prepended by [CLS] token later. + token_start_indices.append(len(bert_tokens) + 1) + pieces = self._tokenizer.tokenize(token) + bert_tokens.extend(pieces) + return bert_tokens, token_start_indices + + def read_input_file( + self, input_filename: str, infer: bool = False + ) -> Union[List['BertExample'], Tuple[List['BertExample'], Tuple[str, str]]]: + """Reads in Tab Separated Value file and converts to training/inference-ready examples. + + Args: + example_builder: Instance of BertExampleBuilder + input_filename: Path to the TSV input file. + infer: If true, input examples do not contain target info. + + Returns: + examples: List of converted examples (BertExample). + or + (examples, hyps_refs): If infer==true, returns h + """ + + if not path.exists(input_filename): + raise ValueError("Cannot find file: " + input_filename) + examples = [] # output list of BertExample + hyps_refs = [] # output list of tuples (ASR-hypothesis, candidate_str) + with open(input_filename, 'r') as f: + for line in f: + if len(examples) % 1000 == 0: + logging.info("{} examples processed.".format(len(examples))) + if infer: + parts = line.rstrip('\n').split('\t') + hyp, ref, target, span_info = parts[0], parts[1], None, None + if len(parts) == 4: + target, span_info = parts[2], parts[3] + try: + example = self.build_bert_example(hyp, ref, target=target, span_info=span_info, infer=infer) + except Exception as e: + logging.warning(str(e)) + logging.warning(line) + continue + if example is None: + logging.info("cannot create example: ") + logging.info(line) + continue + hyps_refs.append((hyp, ref)) + examples.append(example) + else: + hyp, ref, target, semiotic_info = line.rstrip('\n').split('\t') + try: + example = self.build_bert_example( + hyp, ref, target=target, span_info=semiotic_info, infer=infer + ) + except Exception as e: + logging.warning(str(e)) + logging.warning(line) + continue + if example is None: + logging.info("cannot create example: ") + logging.info(line) + continue + examples.append(example) + logging.info(f"Done. {len(examples)} examples converted.") + if infer: + return examples, hyps_refs + return examples diff --git a/nemo/collections/nlp/data/spellchecking_asr_customization/dataset.py b/nemo/collections/nlp/data/spellchecking_asr_customization/dataset.py new file mode 100644 index 000000000000..69705ec21b9d --- /dev/null +++ b/nemo/collections/nlp/data/spellchecking_asr_customization/dataset.py @@ -0,0 +1,521 @@ +# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ + +import pickle +from io import BytesIO +from typing import Dict, List, Optional, Tuple + +import braceexpand +import numpy as np +import torch +import webdataset as wd + +from nemo.collections.nlp.data.spellchecking_asr_customization.bert_example import BertExampleBuilder +from nemo.core.classes.dataset import Dataset, IterableDataset +from nemo.core.neural_types import ChannelType, IntType, LabelsType, MaskType, NeuralType +from nemo.utils import logging + +__all__ = [ + "SpellcheckingAsrCustomizationDataset", + "SpellcheckingAsrCustomizationTestDataset", + "TarredSpellcheckingAsrCustomizationDataset", +] + + +def collate_train_dataset( + batch: List[ + Tuple[ + np.ndarray, + np.ndarray, + np.ndarray, + np.ndarray, + np.ndarray, + np.ndarray, + np.ndarray, + np.ndarray, + np.ndarray, + np.ndarray, + ] + ], + pad_token_id: int, +) -> Tuple[ + torch.LongTensor, + torch.LongTensor, + torch.LongTensor, + torch.LongTensor, + torch.LongTensor, + torch.LongTensor, + torch.LongTensor, + torch.LongTensor, + torch.LongTensor, + torch.LongTensor, +]: + """collate batch of training items + Args: + batch: A list of tuples of (input_ids, input_mask, segment_ids, input_ids_for_subwords, input_mask_for_subwords, segment_ids_for_subwords, character_pos_to_subword_pos, labels_mask, labels, spans). + pad_token_id: integer id of padding token (to use in padded_input_ids, padded_input_ids_for_subwords) + """ + max_length = 0 + max_length_for_subwords = 0 + max_length_for_spans = 1 # to avoid empty tensor + for ( + input_ids, + input_mask, + segment_ids, + input_ids_for_subwords, + input_mask_for_subwords, + segment_ids_for_subwords, + character_pos_to_subword_pos, + labels_mask, + labels, + spans, + ) in batch: + if len(input_ids) > max_length: + max_length = len(input_ids) + if len(input_ids_for_subwords) > max_length_for_subwords: + max_length_for_subwords = len(input_ids_for_subwords) + if len(spans) > max_length_for_spans: + max_length_for_spans = len(spans) + + padded_input_ids = [] + padded_input_mask = [] + padded_segment_ids = [] + padded_input_ids_for_subwords = [] + padded_input_mask_for_subwords = [] + padded_segment_ids_for_subwords = [] + padded_character_pos_to_subword_pos = [] + padded_labels_mask = [] + padded_labels = [] + padded_spans = [] + for ( + input_ids, + input_mask, + segment_ids, + input_ids_for_subwords, + input_mask_for_subwords, + segment_ids_for_subwords, + character_pos_to_subword_pos, + labels_mask, + labels, + spans, + ) in batch: + if len(input_ids) < max_length: + pad_length = max_length - len(input_ids) + padded_input_ids.append(np.pad(input_ids, pad_width=[0, pad_length], constant_values=pad_token_id)) + padded_input_mask.append(np.pad(input_mask, pad_width=[0, pad_length], constant_values=0)) + padded_segment_ids.append(np.pad(segment_ids, pad_width=[0, pad_length], constant_values=0)) + padded_labels_mask.append(np.pad(labels_mask, pad_width=[0, pad_length], constant_values=0)) + padded_labels.append(np.pad(labels, pad_width=[0, pad_length], constant_values=0)) + padded_character_pos_to_subword_pos.append( + np.pad(character_pos_to_subword_pos, pad_width=[0, pad_length], constant_values=0) + ) + else: + padded_input_ids.append(input_ids) + padded_input_mask.append(input_mask) + padded_segment_ids.append(segment_ids) + padded_labels_mask.append(labels_mask) + padded_labels.append(labels) + padded_character_pos_to_subword_pos.append(character_pos_to_subword_pos) + + if len(input_ids_for_subwords) < max_length_for_subwords: + pad_length = max_length_for_subwords - 
len(input_ids_for_subwords) + padded_input_ids_for_subwords.append( + np.pad(input_ids_for_subwords, pad_width=[0, pad_length], constant_values=pad_token_id) + ) + padded_input_mask_for_subwords.append( + np.pad(input_mask_for_subwords, pad_width=[0, pad_length], constant_values=0) + ) + padded_segment_ids_for_subwords.append( + np.pad(segment_ids_for_subwords, pad_width=[0, pad_length], constant_values=0) + ) + else: + padded_input_ids_for_subwords.append(input_ids_for_subwords) + padded_input_mask_for_subwords.append(input_mask_for_subwords) + padded_segment_ids_for_subwords.append(segment_ids_for_subwords) + + if len(spans) < max_length_for_spans: + padded_spans.append(np.ones((max_length_for_spans, 3), dtype=int) * -1) # pad value is [-1, -1, -1] + if len(spans) > 0: + padded_spans[-1][: spans.shape[0], : spans.shape[1]] = spans # copy actual spans to the beginning + else: + padded_spans.append(spans) + + return ( + torch.LongTensor(np.array(padded_input_ids)), + torch.LongTensor(np.array(padded_input_mask)), + torch.LongTensor(np.array(padded_segment_ids)), + torch.LongTensor(np.array(padded_input_ids_for_subwords)), + torch.LongTensor(np.array(padded_input_mask_for_subwords)), + torch.LongTensor(np.array(padded_segment_ids_for_subwords)), + torch.LongTensor(np.array(padded_character_pos_to_subword_pos)), + torch.LongTensor(np.array(padded_labels_mask)), + torch.LongTensor(np.array(padded_labels)), + torch.LongTensor(np.array(padded_spans)), + ) + + +def collate_test_dataset( + batch: List[Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray]], + pad_token_id: int, +) -> Tuple[ + torch.LongTensor, + torch.LongTensor, + torch.LongTensor, + torch.LongTensor, + torch.LongTensor, + torch.LongTensor, + torch.LongTensor, + torch.LongTensor, +]: + """collate batch of test items + Args: + batch: A list of tuples of (input_ids, input_mask, segment_ids, input_ids_for_subwords, input_mask_for_subwords, segment_ids_for_subwords, character_pos_to_subword_pos, fragment_indices). 
+ pad_token_id: integer id of padding token (to use in padded_input_ids, padded_input_ids_for_subwords) + """ + max_length = 0 + max_length_for_subwords = 0 + max_length_for_fragment_indices = 1 # to avoid empty tensor + for ( + input_ids, + input_mask, + segment_ids, + input_ids_for_subwords, + input_mask_for_subwords, + segment_ids_for_subwords, + character_pos_to_subword_pos, + fragment_indices, + ) in batch: + if len(input_ids) > max_length: + max_length = len(input_ids) + if len(input_ids_for_subwords) > max_length_for_subwords: + max_length_for_subwords = len(input_ids_for_subwords) + if len(fragment_indices) > max_length_for_fragment_indices: + max_length_for_fragment_indices = len(fragment_indices) + + padded_input_ids = [] + padded_input_mask = [] + padded_segment_ids = [] + padded_input_ids_for_subwords = [] + padded_input_mask_for_subwords = [] + padded_segment_ids_for_subwords = [] + padded_character_pos_to_subword_pos = [] + padded_fragment_indices = [] + for ( + input_ids, + input_mask, + segment_ids, + input_ids_for_subwords, + input_mask_for_subwords, + segment_ids_for_subwords, + character_pos_to_subword_pos, + fragment_indices, + ) in batch: + if len(input_ids) < max_length: + pad_length = max_length - len(input_ids) + padded_input_ids.append(np.pad(input_ids, pad_width=[0, pad_length], constant_values=pad_token_id)) + padded_input_mask.append(np.pad(input_mask, pad_width=[0, pad_length], constant_values=0)) + padded_segment_ids.append(np.pad(segment_ids, pad_width=[0, pad_length], constant_values=0)) + padded_character_pos_to_subword_pos.append( + np.pad(character_pos_to_subword_pos, pad_width=[0, pad_length], constant_values=0) + ) + else: + padded_input_ids.append(input_ids) + padded_input_mask.append(input_mask) + padded_segment_ids.append(segment_ids) + padded_character_pos_to_subword_pos.append(character_pos_to_subword_pos) + + if len(input_ids_for_subwords) < max_length_for_subwords: + pad_length = max_length_for_subwords - len(input_ids_for_subwords) + padded_input_ids_for_subwords.append( + np.pad(input_ids_for_subwords, pad_width=[0, pad_length], constant_values=pad_token_id) + ) + padded_input_mask_for_subwords.append( + np.pad(input_mask_for_subwords, pad_width=[0, pad_length], constant_values=0) + ) + padded_segment_ids_for_subwords.append( + np.pad(segment_ids_for_subwords, pad_width=[0, pad_length], constant_values=0) + ) + else: + padded_input_ids_for_subwords.append(input_ids_for_subwords) + padded_input_mask_for_subwords.append(input_mask_for_subwords) + padded_segment_ids_for_subwords.append(segment_ids_for_subwords) + + if len(fragment_indices) < max_length_for_fragment_indices: + # we use [0, 1, 0] as padding value for fragment_indices, it corresponds to [CLS] token, which is ignored and won't affect anything + p = np.zeros((max_length_for_fragment_indices, 3), dtype=int) + p[:, 1] = 1 + p[:, 2] = 0 + padded_fragment_indices.append(p) + if len(fragment_indices) > 0: + padded_fragment_indices[-1][ + : fragment_indices.shape[0], : fragment_indices.shape[1] + ] = fragment_indices # copy actual fragment_indices to the beginning + else: + padded_fragment_indices.append(fragment_indices) + + return ( + torch.LongTensor(np.array(padded_input_ids)), + torch.LongTensor(np.array(padded_input_mask)), + torch.LongTensor(np.array(padded_segment_ids)), + torch.LongTensor(np.array(padded_input_ids_for_subwords)), + torch.LongTensor(np.array(padded_input_mask_for_subwords)), + torch.LongTensor(np.array(padded_segment_ids_for_subwords)), + 
torch.LongTensor(np.array(padded_character_pos_to_subword_pos)), + torch.LongTensor(np.array(padded_fragment_indices)), + ) + + +class SpellcheckingAsrCustomizationDataset(Dataset): + """ + Dataset as used by the SpellcheckingAsrCustomizationModel for training and validation pipelines. + + Args: + input_file (str): path to tsv-file with data + example_builder: instance of BertExampleBuilder + """ + + @property + def output_types(self) -> Optional[Dict[str, NeuralType]]: + """Returns definitions of module output ports. + """ + return { + "input_ids": NeuralType(('B', 'T'), ChannelType()), + "input_mask": NeuralType(('B', 'T'), MaskType()), + "segment_ids": NeuralType(('B', 'T'), ChannelType()), + "input_ids_for_subwords": NeuralType(('B', 'T'), ChannelType()), + "input_mask_for_subwords": NeuralType(('B', 'T'), MaskType()), + "segment_ids_for_subwords": NeuralType(('B', 'T'), ChannelType()), + "character_pos_to_subword_pos": NeuralType(('B', 'T'), ChannelType()), + "labels_mask": NeuralType(('B', 'T'), MaskType()), + "labels": NeuralType(('B', 'T'), LabelsType()), + "spans": NeuralType(('B', 'T', 'C'), IntType()), + } + + def __init__(self, input_file: str, example_builder: BertExampleBuilder) -> None: + self.example_builder = example_builder + self.examples = self.example_builder.read_input_file(input_file, infer=False) + self.pad_token_id = self.example_builder._pad_id + + def __len__(self): + return len(self.examples) + + def __getitem__(self, idx: int): + example = self.examples[idx] + input_ids = np.array(example.features["input_ids"], dtype=np.int16) + input_mask = np.array(example.features["input_mask"], dtype=np.int8) + segment_ids = np.array(example.features["segment_ids"], dtype=np.int8) + input_ids_for_subwords = np.array(example.features["input_ids_for_subwords"], dtype=np.int16) + input_mask_for_subwords = np.array(example.features["input_mask_for_subwords"], dtype=np.int8) + segment_ids_for_subwords = np.array(example.features["segment_ids_for_subwords"], dtype=np.int8) + character_pos_to_subword_pos = np.array(example.features["character_pos_to_subword_pos"], dtype=np.int16) + labels_mask = np.array(example.features["labels_mask"], dtype=np.int8) + labels = np.array(example.features["labels"], dtype=np.int8) + spans = np.array(example.features["spans"], dtype=np.int16) + return ( + input_ids, + input_mask, + segment_ids, + input_ids_for_subwords, + input_mask_for_subwords, + segment_ids_for_subwords, + character_pos_to_subword_pos, + labels_mask, + labels, + spans, + ) + + def _collate_fn(self, batch): + """collate batch of items + Args: + batch: A list of tuples of (input_ids, input_mask, segment_ids, input_ids_for_subwords, input_mask_for_subwords, segment_ids_for_subwords, character_pos_to_subword_pos, labels_mask, labels, spans). + """ + return collate_train_dataset(batch, pad_token_id=self.pad_token_id) + + +class TarredSpellcheckingAsrCustomizationDataset(IterableDataset): + """ + This Dataset loads training examples from tarred tokenized pickle files. + If using multiple processes the number of shards should be divisible by the number of workers to ensure an + even split among workers. If it is not divisible, logging will give a warning but training will proceed. + Additionally, please note that the len() of this DataLayer is assumed to be the number of tokens + of the text data. Shard strategy is scatter - each node gets a unique set of shards, which are permanently + pre-allocated and never changed at runtime. 
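A minimal usage sketch for the (non-tarred) training dataset defined above; it is illustrative only, and both the tsv path and the `builder` object (an already-constructed BertExampleBuilder) are assumed here rather than taken from the patch:

from torch.utils.data import DataLoader

dataset = SpellcheckingAsrCustomizationDataset("train.tsv", example_builder=builder)  # placeholder path/object
loader = DataLoader(dataset, batch_size=8, collate_fn=dataset._collate_fn)

batch = next(iter(loader))
(input_ids, input_mask, segment_ids, input_ids_for_subwords, input_mask_for_subwords,
 segment_ids_for_subwords, character_pos_to_subword_pos, labels_mask, labels, spans) = batch
# All outputs are torch.LongTensor: character-level tensors share one padded length,
# subword-level tensors another, and spans is padded with [-1, -1, -1] rows.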
+ Args: + text_tar_filepaths: a string (can be brace-expandable). + shuffle_n (int): How many samples to look ahead and load to be shuffled. + See WebDataset documentation for more details. + Defaults to 0. + global_rank (int): Worker rank, used for partitioning shards. Defaults to 0. + world_size (int): Total number of processes, used for partitioning shards. Defaults to 1. + pad_token_id: id of pad token (used in collate_fn) + """ + + def __init__( + self, + text_tar_filepaths: str, + shuffle_n: int = 1, + global_rank: int = 0, + world_size: int = 1, + pad_token_id: int = -1, # use real value or get error + ): + super(TarredSpellcheckingAsrCustomizationDataset, self).__init__() + if pad_token_id < 0: + raise ValueError("use non-negative pad_token_id: " + str(pad_token_id)) + + self.pad_token_id = pad_token_id + + # Replace '(', '[', '<' and '_OP_' with '{' + brace_keys_open = ['(', '[', '<', '_OP_'] + for bkey in brace_keys_open: + if bkey in text_tar_filepaths: + text_tar_filepaths = text_tar_filepaths.replace(bkey, "{") + + # Replace ')', ']', '>' and '_CL_' with '}' + brace_keys_close = [')', ']', '>', '_CL_'] + for bkey in brace_keys_close: + if bkey in text_tar_filepaths: + text_tar_filepaths = text_tar_filepaths.replace(bkey, "}") + + # Brace expand + text_tar_filepaths = list(braceexpand.braceexpand(text_tar_filepaths)) + + logging.info("Tarred dataset shards will be scattered evenly across all nodes.") + if len(text_tar_filepaths) % world_size != 0: + logging.warning( + f"Number of shards in tarred dataset ({len(text_tar_filepaths)}) is not divisible " + f"by number of distributed workers ({world_size}). " + f"Some shards will not be used ({len(text_tar_filepaths) % world_size})." + ) + begin_idx = (len(text_tar_filepaths) // world_size) * global_rank + end_idx = begin_idx + (len(text_tar_filepaths) // world_size) + logging.info('Begin Index : %d' % (begin_idx)) + logging.info('End Index : %d' % (end_idx)) + text_tar_filepaths = text_tar_filepaths[begin_idx:end_idx] + logging.info( + "Partitioning tarred dataset: process (%d) taking shards [%d, %d)", global_rank, begin_idx, end_idx + ) + + self.tarpath = text_tar_filepaths + + # Put together WebDataset + self._dataset = wd.WebDataset(urls=text_tar_filepaths, nodesplitter=None) + + if shuffle_n > 0: + self._dataset = self._dataset.shuffle(shuffle_n, initial=shuffle_n) + else: + logging.info("WebDataset will not shuffle files within the tar files.") + + self._dataset = self._dataset.rename(pkl='pkl', key='__key__').to_tuple('pkl', 'key').map(f=self._build_sample) + + def _build_sample(self, fname): + # Load file + pkl_file, _ = fname + pkl_file = BytesIO(pkl_file) + data = pickle.load(pkl_file) + pkl_file.close() + input_ids = data["input_ids"] + input_mask = data["input_mask"] + segment_ids = data["segment_ids"] + input_ids_for_subwords = data["input_ids_for_subwords"] + input_mask_for_subwords = data["input_mask_for_subwords"] + segment_ids_for_subwords = data["segment_ids_for_subwords"] + character_pos_to_subword_pos = data["character_pos_to_subword_pos"] + labels_mask = data["labels_mask"] + labels = data["labels"] + spans = data["spans"] + + return ( + input_ids, + input_mask, + segment_ids, + input_ids_for_subwords, + input_mask_for_subwords, + segment_ids_for_subwords, + character_pos_to_subword_pos, + labels_mask, + labels, + spans, + ) + + def __iter__(self): + return self._dataset.__iter__() + + def _collate_fn(self, batch): + """collate batch of items + Args: + batch: A list of tuples of (input_ids, input_mask, 
segment_ids, input_ids_for_subwords, input_mask_for_subwords, segment_ids_for_subwords, character_pos_to_subword_pos, labels_mask, labels, spans). + """ + return collate_train_dataset(batch, pad_token_id=self.pad_token_id) + + +class SpellcheckingAsrCustomizationTestDataset(Dataset): + """ + Dataset for inference pipeline. + + Args: + sents: list of strings + example_builder: instance of BertExampleBuilder + """ + + @property + def output_types(self) -> Optional[Dict[str, NeuralType]]: + """Returns definitions of module output ports. + """ + return { + "input_ids": NeuralType(('B', 'T'), ChannelType()), + "input_mask": NeuralType(('B', 'T'), MaskType()), + "segment_ids": NeuralType(('B', 'T'), ChannelType()), + "input_ids_for_subwords": NeuralType(('B', 'T'), ChannelType()), + "input_mask_for_subwords": NeuralType(('B', 'T'), MaskType()), + "segment_ids_for_subwords": NeuralType(('B', 'T'), ChannelType()), + "character_pos_to_subword_pos": NeuralType(('B', 'T'), ChannelType()), + "fragment_indices": NeuralType(('B', 'T', 'C'), IntType()), + } + + def __init__(self, input_file: str, example_builder: BertExampleBuilder) -> None: + self.example_builder = example_builder + self.examples, self.hyps_refs = self.example_builder.read_input_file(input_file, infer=True) + self.pad_token_id = self.example_builder._pad_id + + def __len__(self): + return len(self.examples) + + def __getitem__(self, idx: int): + example = self.examples[idx] + input_ids = np.array(example.features["input_ids"]) + input_mask = np.array(example.features["input_mask"]) + segment_ids = np.array(example.features["segment_ids"]) + input_ids_for_subwords = np.array(example.features["input_ids_for_subwords"]) + input_mask_for_subwords = np.array(example.features["input_mask_for_subwords"]) + segment_ids_for_subwords = np.array(example.features["segment_ids_for_subwords"]) + character_pos_to_subword_pos = np.array(example.features["character_pos_to_subword_pos"], dtype=np.int64) + fragment_indices = np.array(example.features["fragment_indices"], dtype=np.int16) + return ( + input_ids, + input_mask, + segment_ids, + input_ids_for_subwords, + input_mask_for_subwords, + segment_ids_for_subwords, + character_pos_to_subword_pos, + fragment_indices, + ) + + def _collate_fn(self, batch): + """collate batch of items + Args: + batch: A list of tuples of (input_ids, input_mask, segment_ids, input_ids_for_subwords, input_mask_for_subwords, segment_ids_for_subwords, character_pos_to_subword_pos). + """ + return collate_test_dataset(batch, pad_token_id=self.pad_token_id) diff --git a/nemo/collections/nlp/data/spellchecking_asr_customization/utils.py b/nemo/collections/nlp/data/spellchecking_asr_customization/utils.py new file mode 100644 index 000000000000..cda551189d78 --- /dev/null +++ b/nemo/collections/nlp/data/spellchecking_asr_customization/utils.py @@ -0,0 +1,845 @@ +# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
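The helpers in this new utils.py operate on a character-level text representation: letters separated by spaces, with real spaces replaced by underscores. A rough sketch of that convention (the helper names here are illustrative, not part of the module):

def to_char_representation(text: str) -> str:
    # "didier saumon" -> "d i d i e r _ s a u m o n"
    return " ".join(list(text.replace(" ", "_")))

def from_char_representation(spaced: str) -> str:
    # "d i d i e r _ s a u m o n" -> "didier saumon"
    return spaced.replace(" ", "").replace("_", " ")

assert to_char_representation("didier saumon") == "d i d i e r _ s a u m o n"
assert from_char_representation("d i d i e r _ s a u m o n") == "didier saumon"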
+
+
+import json
+import math
+import random
+import re
+from collections import defaultdict, namedtuple
+from typing import Dict, List, Set, Tuple, Union
+
+import numpy as np
+from numba import jit
+
+"""Utility functions for Spellchecking ASR Customization."""
+
+
+def replace_diacritics(text):
+    text = re.sub(r"[éèëēêęěė]", "e", text)  # latin
+    text = re.sub(r"[ё]", "е", text)  # cyrillic
+    text = re.sub(r"[ãâāáäăàąåạảǎ]", "a", text)
+    text = re.sub(r"[úūüùưûů]", "u", text)
+    text = re.sub(r"[ôōóöõòőø]", "o", text)
+    text = re.sub(r"[ćçč]", "c", text)
+    text = re.sub(r"[ïīíîıì]", "i", text)
+    text = re.sub(r"[ñńňņ]", "n", text)
+    text = re.sub(r"[țťţ]", "t", text)
+    text = re.sub(r"[łľļ]", "l", text)
+    text = re.sub(r"[żžź]", "z", text)
+    text = re.sub(r"[ğ]", "g", text)
+    text = re.sub(r"[ďđ]", "d", text)
+    text = re.sub(r"[ķ]", "k", text)
+    text = re.sub(r"[ř]", "r", text)
+    text = re.sub(r"[ý]", "y", text)
+    text = re.sub(r"[æ]", "ae", text)
+    text = re.sub(r"[œ]", "oe", text)
+    text = re.sub(r"[șşšś]", "s", text)
+    return text
+
+
+def load_ngram_mappings(input_name: str, max_misspelled_freq: int = 1000000000) -> Tuple[defaultdict, Set]:
+    """Loads n-gram mapping vocabulary in the form required for indexing of custom phrases.
+    Args:
+        input_name: file with n-gram mappings
+        max_misspelled_freq: threshold on misspelled n-gram frequency
+    Returns:
+        vocab: dict {key=original_ngram, value=dict{key=misspelled_ngram, value=frequency}}
+        ban_ngram: set of banned misspelled n-grams
+
+    Input format (tab-separated): original, misspelled, joint_freq, original_freq, misspelled_freq
+        u t o    u+i t o    49    8145    114
+        u t o    t e        63    8145    16970
+        u t o    o+_ t o    42    8145    1807
+    """
+    vocab = defaultdict(dict)
+    ban_ngram = set()
+
+    with open(input_name, "r", encoding="utf-8") as f:
+        for line in f:
+            orig, misspelled, joint_freq, orig_freq, misspelled_freq = line.strip().split("\t")
+            if orig == "" or misspelled == "":
+                raise ValueError("Empty n-gram: orig=" + orig + "; misspelled=" + misspelled)
+            misspelled = misspelled.replace("<DELETE>", "=")
+            if misspelled.replace("=", "").strip() == "":  # skip if resulting ngram doesn't contain any real character
+                continue
+            if int(misspelled_freq) > max_misspelled_freq:
+                ban_ngram.add(misspelled + " ")  # space at the end is required within get_index function
+            vocab[orig][misspelled] = int(joint_freq) / int(orig_freq)
+    return vocab, ban_ngram
+
+
+def load_ngram_mappings_for_dp(input_name: str) -> Tuple[defaultdict, defaultdict, defaultdict, int]:
+    """Loads n-gram mapping vocabularies in the form required by dynamic programming
+    Args:
+        input_name: file with n-gram mappings
+    Returns:
+        joint_vocab: dict where key=(original_ngram, misspelled_ngram), value=frequency
+        orig_vocab: dict where key=original_ngram, value=frequency
+        misspelled_vocab: dict where key=misspelled_ngram, value=frequency
+        max_len: maximum n-gram length seen in vocabulary
+
+    Input format: original \t misspelled \t joint_freq \t original_freq \t misspelled_freq
+        u t o    u+i t o    49    8145    114
+        u t o    t e        63    8145    16970
+        u t o    o+_ t o    42    8145    1807
+    """
+    joint_vocab = defaultdict(int)
+    orig_vocab = defaultdict(int)
+    misspelled_vocab = defaultdict(int)
+    max_len = 0
+    with open(input_name, "r", encoding="utf-8") as f:
+        for line in f:
+            orig, misspelled, joint_freq, _, _ = line.strip().split("\t")
+            if orig == "" or misspelled == "":
+                raise ValueError("Empty n-gram: orig=" + orig + "; misspelled=" + misspelled)
+            misspelled = misspelled.replace("<DELETE>", " ").replace("+", " ")
+            misspelled = " ".join(misspelled.split())
+            if misspelled == "":  # skip if resulting ngram doesn't contain any real
character + continue + max_len = max(max_len, orig.count(" ") + 1, misspelled.count(" ") + 1) + joint_vocab[(orig, misspelled)] += int(joint_freq) + orig_vocab[orig] += int(joint_freq) + misspelled_vocab[misspelled] += int(joint_freq) + return joint_vocab, orig_vocab, misspelled_vocab, max_len + + +def get_alignment_by_dp( + ref_phrase: str, hyp_phrase: str, dp_data: Tuple[defaultdict, defaultdict, defaultdict, int] +) -> List[Tuple[str, str, float, float, int, int, int]]: + """Get best alignment path between a reference and (possibly) misspelled phrase using n-gram mappings vocabulary. + Args: + ref_phrase: candidate reference phrase (letters separated by space, real space replaced by underscore) + hyp_phrase: (possibly) misspelled phrase (letters separated by space, real space replaced by underscore) + dp_data: n-gram mapping vocabularies used by dynamic programming + Returns: + list of tuples (hyp_ngram, ref_ngram, logprob, sum_logprob, joint_freq, orig_freq, misspelled_freq) + This is best alignment path. + + Example: + ref_phrase: "a n h y d r i d e" + hyp_phrase: "a n d _ h y d r o d" + + Result: + [("*", "*", 0.0, 0.0, 0, 0, 0) + ("a n d _ h", "a n h", -2.34, -2.34, 226, 2338, 2203) + ("y d r o", "y d r i", -2.95, -5.29, 11, 211, 1584) + ("d", "d e", -1.99, -7.28, 60610, 444714, 2450334) + ] + Final path score is in path[-1][3]: -7.28 + Note that the order of ref_phrase and hyp_phrase matters, because n-gram mappings vocabulary is not symmetrical. + """ + joint_vocab, orig_vocab, misspelled_vocab, max_len = dp_data + hyp_letters = ["*"] + hyp_phrase.split() + ref_letters = ["*"] + ref_phrase.split() + DpInfo = namedtuple( + "DpInfo", ["hyp_pos", "ref_pos", "best_hyp_ngram_len", "best_ref_ngram_len", "score", "sum_score"] + ) + history = defaultdict(DpInfo) + history[(0, 0)] = DpInfo( + hyp_pos=0, ref_pos=0, best_hyp_ngram_len=1, best_ref_ngram_len=1, score=0.0, sum_score=0.0 + ) + for hyp_pos in range(len(hyp_letters)): + for ref_pos in range(len(ref_letters)): + if hyp_pos == 0 and ref_pos == 0: # cell (0, 0) is already defined + continue + # consider cell (hyp_pos, ref_pos) and find best path to get there + best_hyp_ngram_len = 0 + best_ref_ngram_len = 0 + best_ngram_score = float("-inf") + best_sum_score = float("-inf") + # loop over paths ending on non-empty ngram mapping + for hyp_ngram_len in range(1, 1 + min(max_len, hyp_pos + 1)): + hyp_ngram = " ".join(hyp_letters[(hyp_pos - hyp_ngram_len + 1) : (hyp_pos + 1)]) + for ref_ngram_len in range(1, 1 + min(max_len, ref_pos + 1)): + ref_ngram = " ".join(ref_letters[(ref_pos - ref_ngram_len + 1) : (ref_pos + 1)]) + if (ref_ngram, hyp_ngram) not in joint_vocab: + continue + joint_freq = joint_vocab[(ref_ngram, hyp_ngram)] + orig_freq = orig_vocab.get(ref_ngram, 1) + ngram_score = math.log(joint_freq / orig_freq) + previous_cell = (hyp_pos - hyp_ngram_len, ref_pos - ref_ngram_len) + if previous_cell not in history: + print("cell ", previous_cell, "does not exist") + continue + previous_score = history[previous_cell].sum_score + sum_score = ngram_score + previous_score + if sum_score > best_sum_score: + best_sum_score = sum_score + best_ngram_score = ngram_score + best_hyp_ngram_len = hyp_ngram_len + best_ref_ngram_len = ref_ngram_len + # loop over two variants with deletion of one character + deletion_score = -6.0 + insertion_score = -6.0 + if hyp_pos > 0: + previous_cell = (hyp_pos - 1, ref_pos) + previous_score = history[previous_cell].sum_score + sum_score = deletion_score + previous_score + if sum_score > best_sum_score: + 
best_sum_score = sum_score + best_ngram_score = deletion_score + best_hyp_ngram_len = 1 + best_ref_ngram_len = 0 + + if ref_pos > 0: + previous_cell = (hyp_pos, ref_pos - 1) + previous_score = history[previous_cell].sum_score + sum_score = insertion_score + previous_score + if sum_score > best_sum_score: + best_sum_score = sum_score + best_ngram_score = insertion_score + best_hyp_ngram_len = 0 + best_ref_ngram_len = 1 + + if best_hyp_ngram_len == 0 and best_ref_ngram_len == 0: + raise ValueError("best_hyp_ngram_len = 0 and best_ref_ngram_len = 0") + + # save cell to history + history[(hyp_pos, ref_pos)] = DpInfo( + hyp_pos=hyp_pos, + ref_pos=ref_pos, + best_hyp_ngram_len=best_hyp_ngram_len, + best_ref_ngram_len=best_ref_ngram_len, + score=best_ngram_score, + sum_score=best_sum_score, + ) + # now trace back on best path starting from last positions + path = [] + hyp_pos = len(hyp_letters) - 1 + ref_pos = len(ref_letters) - 1 + cell_info = history[(hyp_pos, ref_pos)] + path.append(cell_info) + while hyp_pos > 0 or ref_pos > 0: + hyp_pos -= cell_info.best_hyp_ngram_len + ref_pos -= cell_info.best_ref_ngram_len + cell_info = history[(hyp_pos, ref_pos)] + path.append(cell_info) + + result = [] + for info in reversed(path): + hyp_ngram = " ".join(hyp_letters[(info.hyp_pos - info.best_hyp_ngram_len + 1) : (info.hyp_pos + 1)]) + ref_ngram = " ".join(ref_letters[(info.ref_pos - info.best_ref_ngram_len + 1) : (info.ref_pos + 1)]) + joint_freq = joint_vocab.get((ref_ngram, hyp_ngram), 0) + orig_freq = orig_vocab.get(ref_ngram, 0) + misspelled_freq = misspelled_vocab.get(hyp_ngram, 0) + result.append((hyp_ngram, ref_ngram, info.score, info.sum_score, joint_freq, orig_freq, misspelled_freq)) + return result + + +def get_index( + custom_phrases: List[str], + vocab: defaultdict, + ban_ngram_global: Set[str], + min_log_prob: float = -4.0, + max_phrases_per_ngram: int = 100, +) -> Tuple[List[str], Dict[str, List[Tuple[int, int, int, float]]]]: + """Given a restricted vocabulary of replacements, + loops through custom phrases, + generates all possible conversions and creates index. + + Args: + custom_phrases: list of all custom phrases, characters should be split by space, real space replaced to underscore. + vocab: n-gram mappings vocabulary - dict {key=original_ngram, value=dict{key=misspelled_ngram, value=frequency}} + ban_ngram_global: set of banned misspelled n-grams + min_log_prob: minimum log probability, after which we stop growing this n-gram. + max_phrases_per_ngram: maximum phrases that we allow to store per one n-gram. N-grams exceeding that quantity get banned. + + Returns: + phrases - list of phrases. Position in this list is used as phrase_id. + ngram2phrases - resulting index, i.e. 
dict where key=ngram, value=list of tuples (phrase_id, begin_pos, size, logprob) + """ + + ban_ngram_local = set() # these ngrams are banned only for given custom_phrases + ngram_to_phrase_and_position = defaultdict(list) + + for custom_phrase in custom_phrases: + inputs = custom_phrase.split(" ") + begin = 0 + index_keys = [{} for _ in inputs] # key - letter ngram, index - beginning positions in phrase + + for begin in range(len(inputs)): + for end in range(begin + 1, min(len(inputs) + 1, begin + 5)): + inp = " ".join(inputs[begin:end]) + if inp not in vocab: + continue + for rep in vocab[inp]: + lp = math.log(vocab[inp][rep]) + + for b in range(max(0, end - 5), end): # try to grow previous ngrams with new replacement + new_ngrams = {} + for ngram in index_keys[b]: + lp_prev = index_keys[b][ngram] + if len(ngram) + len(rep) <= 10 and b + ngram.count(" ") == begin: + if lp_prev + lp > min_log_prob: + new_ngrams[ngram + rep + " "] = lp_prev + lp + index_keys[b].update(new_ngrams) # join two dictionaries + # add current replacement as ngram + if lp > min_log_prob: + index_keys[begin][rep + " "] = lp + + for b in range(len(index_keys)): + for ngram, lp in sorted(index_keys[b].items(), key=lambda item: item[1], reverse=True): + if ngram in ban_ngram_global: # here ngram ends with a space + continue + real_length = ngram.count(" ") + ngram = ngram.replace("+", " ").replace("=", " ") + ngram = " ".join(ngram.split()) # here ngram doesn't end with a space anymore + if ngram + " " in ban_ngram_global: # this can happen after deletion of + and = + continue + if ngram in ban_ngram_local: + continue + ngram_to_phrase_and_position[ngram].append((custom_phrase, b, real_length, lp)) + if len(ngram_to_phrase_and_position[ngram]) > max_phrases_per_ngram: + ban_ngram_local.add(ngram) + del ngram_to_phrase_and_position[ngram] + continue + + phrases = [] # id to phrase + phrase2id = {} # phrase to id + ngram2phrases = defaultdict(list) # ngram to list of tuples (phrase_id, begin, length, logprob) + + for ngram in ngram_to_phrase_and_position: + for phrase, b, length, lp in ngram_to_phrase_and_position[ngram]: + if phrase not in phrase2id: + phrases.append(phrase) + phrase2id[phrase] = len(phrases) - 1 + ngram2phrases[ngram].append((phrase2id[phrase], b, length, lp)) + + return phrases, ngram2phrases + + +def load_index(input_name: str) -> Tuple[List[str], Dict[str, List[Tuple[int, int, int, float]]]]: + """ Load index from file + Args: + input_name: file with index + Returns: + phrases: List of all phrases in custom vocabulary. Position corresponds to phrase_id. 
+ ngram2phrases: dict where key=ngram, value=list of tuples (phrase_id, begin_pos, size, logprob) + """ + phrases = [] # id to phrase + phrase2id = {} # phrase to id + ngram2phrases = defaultdict(list) # ngram to list of tuples (phrase_id, begin_pos, size, logprob) + with open(input_name, "r", encoding="utf-8") as f: + for line in f: + ngram, phrase, b, size, lp = line.split("\t") + b = int(b) + size = int(size) + lp = float(lp) + if phrase not in phrase2id: + phrases.append(phrase) + phrase2id[phrase] = len(phrases) - 1 + ngram2phrases[ngram].append((phrase2id[phrase], b, size, lp)) + return phrases, ngram2phrases + + +def search_in_index( + ngram2phrases: Dict[str, List[Tuple[int, int, int, float]]], phrases: List[str], letters: Union[str, List[str]] +) -> Tuple[np.ndarray, List[Set[str]]]: + """ Function used to search in index + + Args: + ngram2phrases: dict where key=ngram, value=list of tuples (phrase_id, begin_pos, size, logprob) + phrases: List of all phrases in custom vocabulary. Position corresponds to phrase_id. + letters: list of letters of ASR-hypothesis. Should not contain spaces - real spaces should be replaced with underscores. + + Returns: + phrases2positions: a matrix of size (len(phrases), len(letters)). + It is filled with 1.0 (hits) on intersection of letter n-grams and phrases that are indexed by these n-grams, 0.0 - elsewhere. + It is used later to find phrases with many hits within a contiguous window - potential matching candidates. + position2ngrams: positions in ASR-hypothesis mapped to sets of ngrams starting from that position. + It is used later to check how well each found candidate is covered by n-grams (to avoid cases where some repeating n-gram gives many hits to a phrase, but the phrase itself is not well covered). + """ + + if " " in letters: + raise ValueError("letters should not contain space: " + str(letters)) + + phrases2positions = np.zeros((len(phrases), len(letters)), dtype=float) + # positions mapped to sets of ngrams starting from that position + position2ngrams = [set() for _ in range(len(letters))] + + begin = 0 + for begin in range(len(letters)): + for end in range(begin + 1, min(len(letters) + 1, begin + 7)): + ngram = " ".join(letters[begin:end]) + if ngram not in ngram2phrases: + continue + for phrase_id, b, size, lp in ngram2phrases[ngram]: + phrases2positions[phrase_id, begin:end] = 1.0 + position2ngrams[begin].add(ngram) + return phrases2positions, position2ngrams + + +@jit(nopython=True) # Set "nopython" mode for best performance, equivalent to @njit +def get_all_candidates_coverage(phrases, phrases2positions): + """Get maximum hit coverage for each phrase - within a moving window of length of the phrase. + Args: + phrases: List of all phrases in custom vocabulary. Position corresponds to phrase_id. + phrases2positions: a matrix of size (len(phrases), len(ASR-hypothesis)). + It is filled with 1.0 (hits) on intersection of letter n-grams and phrases that are indexed by these n-grams, 0.0 - elsewhere. + Returns: + candidate2coverage: list of size len(phrases) containing coverage (0.0 to 1.0) in best window. + candidate2position: list of size len(phrases) containing starting position of best window. 
+ """ + candidate2coverage = [0.0] * len(phrases) + candidate2position = [-1] * len(phrases) + + for i in range(len(phrases)): + phrase_length = phrases[i].count(" ") + 1 + all_coverage = np.sum(phrases2positions[i]) / phrase_length + # if total coverage on whole ASR-hypothesis is too small, there is no sense in using moving window + if all_coverage < 0.4: + continue + moving_sum = np.sum(phrases2positions[i, 0:phrase_length]) + max_sum = moving_sum + best_pos = 0 + for pos in range(1, phrases2positions.shape[1] - phrase_length + 1): + moving_sum -= phrases2positions[i, pos - 1] + moving_sum += phrases2positions[i, pos + phrase_length - 1] + if moving_sum > max_sum: + max_sum = moving_sum + best_pos = pos + + coverage = max_sum / (phrase_length + 2) # smoothing + candidate2coverage[i] = coverage + candidate2position[i] = best_pos + return candidate2coverage, candidate2position + + +def get_candidates( + ngram2phrases: Dict[str, List[Tuple[int, int, int, float]]], + phrases: List[str], + letters: Union[str, List[str]], + pool_for_random_candidates: List[str], + min_phrase_coverage: float = 0.8, +) -> List[Tuple[str, int, int, float, float]]: + """Given an index of custom vocabulary and an ASR-hypothesis retrieve 10 candidates. + Args: + ngram2phrases: dict where key=ngram, value=list of tuples (phrase_id, begin_pos, size, logprob) + phrases: List of all phrases in custom vocabulary. Position corresponds to phrase_id. + letters: list of letters of ASR-hypothesis. Should not contain spaces - real spaces should be replaced with underscores. + pool_for_random_candidates: large list of strings, from which to sample random candidates in case when there are less than 10 real candidates + min_phrase_coverage: We discard candidates which are not covered by n-grams to at least to this extent + (to avoid cases where some repeating n-gram gives many hits to a phrase, but the phrase itself is not well covered). + Returns: + candidates: list of tuples (candidate_text, approximate_begin_position, length, coverage of window in ASR-hypothesis, coverage of phrase itself). 
+ """ + phrases2positions, position2ngrams = search_in_index(ngram2phrases, phrases, letters) + candidate2coverage, candidate2position = get_all_candidates_coverage(phrases, phrases2positions) + + # mask for each custom phrase, how many which symbols are covered by input ngrams + phrases2coveredsymbols = [[0 for x in phrases[i].split(" ")] for i in range(len(phrases))] + candidates = [] + k = 0 + for idx, coverage in sorted(enumerate(candidate2coverage), key=lambda item: item[1], reverse=True): + begin = candidate2position[idx] # this is most likely beginning of this candidate + phrase_length = phrases[idx].count(" ") + 1 + for pos in range(begin, begin + phrase_length): + # we do not know exact end of custom phrase in text, it can be different from phrase length + if pos >= len(position2ngrams): + break + for ngram in position2ngrams[pos]: + for phrase_id, b, size, lp in ngram2phrases[ngram]: + if phrase_id != idx: + continue + for ppos in range(b, b + size): + if ppos >= phrase_length: + break + phrases2coveredsymbols[phrase_id][ppos] = 1 + k += 1 + if k > 100: + break + real_coverage = sum(phrases2coveredsymbols[idx]) / len(phrases2coveredsymbols[idx]) + if real_coverage < min_phrase_coverage: + continue + candidates.append((phrases[idx], begin, phrase_length, coverage, real_coverage)) + + # no need to process this sentence further if it does not contain any real candidates + if len(candidates) == 0: + print("WARNING: no real candidates", candidates) + return [] + + while len(candidates) < 10: + dummy = random.choice(pool_for_random_candidates) + dummy = " ".join(list(dummy.replace(" ", "_"))) + candidates.append((dummy, -1, dummy.count(" ") + 1, 0.0, 0.0)) + + candidates = candidates[:10] + random.shuffle(candidates) + if len(candidates) != 10: + print("WARNING: cannot get 10 candidates", candidates) + return [] + + return candidates + + +def read_spellmapper_predictions(filename: str) -> List[Tuple[str, List[Tuple[int, int, str, float]], List[int]]]: + """Read results of SpellMapper inference from file. 
+ Args: + filename: file with SpellMapper results + Returns: + list of tuples (sent, list of fragment predictions, list of letter predictions) + One fragment prediction is a tuple (begin, end, replacement_text, prob) + """ + results = [] + with open(filename, "r", encoding="utf-8") as f: + for line in f: + text, candidate_str, fragment_predictions_str, letter_predictions_str = line.strip().split("\t") + text = text.replace(" ", "").replace("_", " ") + candidate_str = candidate_str.replace(" ", "").replace("_", " ") + candidates = candidate_str.split(";") + letter_predictions = list(map(int, letter_predictions_str.split())) + if len(candidates) != 10: + raise IndexError("expect 10 candidates, got: ", len(candidates)) + if len(text) != len(letter_predictions): + raise IndexError("len(text)=", len(text), "; len(letter_predictions)=", len(letter_predictions)) + replacements = [] + if fragment_predictions_str != "": + for prediction in fragment_predictions_str.split(";"): + begin, end, candidate_id, prob = prediction.split(" ") + begin = int(begin) + end = int(end) + candidate_id = int(candidate_id) + prob = float(prob) + replacements.append((begin, end, candidates[candidate_id - 1], prob)) + replacements.sort() # it will sort by begin, then by end + results.append((text, replacements, letter_predictions)) + return results + + +def substitute_replacements_in_text( + text: str, replacements: List[Tuple[int, int, str, float]], replace_hyphen_to_space: bool +) -> str: + """Substitute replacements to the input text, iterating from end to beginning, so that indexing does not change. + Note that we expect intersecting replacements to be already filtered. + Args: + text: sentence; + replacements: list of replacements, each is a tuple (begin, end, text, probability); + replace_hyphen_to_space: if True, hyphens in replacements will be converted to spaces; + Returns: + corrected sentence + """ + replacements.sort() + last_begin = len(text) + 1 + corrected_text = text + for begin, end, candidate, prob in reversed(replacements): + if end > last_begin: + print("WARNING: skip intersecting replacement [", candidate, "] in text: ", text) + continue + if replace_hyphen_to_space: + candidate = candidate.replace("-", " ") + corrected_text = corrected_text[:begin] + candidate + corrected_text[end:] + last_begin = begin + return corrected_text + + +def apply_replacements_to_text( + text: str, + replacements: List[Tuple[int, int, str, float]], + min_prob: float = 0.5, + replace_hyphen_to_space: bool = False, + dp_data: Tuple[defaultdict, defaultdict, defaultdict, int] = None, + min_dp_score_per_symbol: float = -99.9, +) -> str: + """Filter and apply replacements to the input sentence. 
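A small illustration of substitute_replacements_in_text with invented values; replacements are applied right-to-left so that earlier character offsets stay valid:

text = "astronomers didie somon and tristian gllo"
replacements = [
    (12, 17, "didier", 0.96),  # (begin, end, replacement_text, probability)
    (18, 23, "saumon", 0.94),
]
corrected = substitute_replacements_in_text(text, replacements, replace_hyphen_to_space=False)
# -> "astronomers didier saumon and tristian gllo"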
+ Args: + text: input sentence; + replacements: list of proposed replacements (probably intersecting), each is a tuple (begin, end, text, probability); + min_prob: threshold on replacement probability; + replace_hyphen_to_space: if True, hyphens in replacements will be converted to spaces; + dp_data: n-gram mapping vocabularies used by dynamic programming, if None - dynamic programming is not used; + min_dp_score_per_symbol: threshold on dynamic programming sum score averaged by hypothesis length + Returns: + corrected sentence + """ + # sort replacements by positions + replacements.sort() + # filter replacements + # Note that we do not skip replacements with same text, otherwise intersecting candidates with lower probability can win + filtered_replacements = [] + for j in range(len(replacements)): + replacement = replacements[j] + begin, end, candidate, prob = replacement + fragment = text[begin:end] + candidate_spaced = " ".join(list(candidate.replace(" ", "_"))) + fragment_spaced = " ".join(list(fragment.replace(" ", "_"))) + # apply penalty if candidate length is bigger than fragment length + # to avoid cases like "forward-looking" replacing "looking" in "forward looking" resulting in "forward forward looking" + if len(candidate) > len(fragment): + penalty = len(fragment) / len(candidate) + prob *= penalty + # skip replacement with low probability + if prob < min_prob: + continue + # skip replacements with some predefined templates, e.g. "*'s" => "*s" + if check_banned_replacements(fragment, candidate): + continue + if dp_data is not None: + path = get_alignment_by_dp(candidate_spaced, fragment_spaced, dp_data) + # path[-1][3] is the sum of logprobs for best path of dynamic programming: divide sum_score by length + if path[-1][3] / (len(fragment)) < min_dp_score_per_symbol: + continue + + # skip replacement if it intersects with previous replacement and has lower probability, otherwise remove previous replacement + if len(filtered_replacements) > 0 and filtered_replacements[-1][1] > begin: + if filtered_replacements[-1][3] > prob: + continue + else: + filtered_replacements.pop() + filtered_replacements.append((begin, end, candidate, prob)) + + return substitute_replacements_in_text(text, filtered_replacements, replace_hyphen_to_space) + + +def update_manifest_with_spellmapper_corrections( + input_manifest_name: str, + short2full_name: str, + output_manifest_name: str, + spellmapper_results_name: str, + min_prob: float = 0.5, + replace_hyphen_to_space: bool = True, + field_name: str = "pred_text", + use_dp: bool = True, + ngram_mappings: Union[str, None] = None, + min_dp_score_per_symbol: float = -1.5, +) -> None: + """Post-process SpellMapper predictions and write corrected sentence to the specified field of nemo manifest. + The previous content of this field will be copied to "*_before_correction" field. + If the sentence was split into fragments before running SpellMapper, all replacements will be first gathered together and then applied to the original long sentence. 
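A toy illustration of the filtering performed by apply_replacements_to_text above (all numbers and candidate strings are invented): intersecting proposals compete by probability, candidates longer than the fragment they replace are penalized, and proposals below min_prob are dropped:

text = "astronomers didie somon"
proposed = [
    (12, 17, "didier", 0.90),
    (12, 23, "didier saumon", 0.95),  # intersects the previous proposal but is more probable
    (18, 23, "solomon", 0.20),        # falls below min_prob and is dropped
]
corrected = apply_replacements_to_text(text, proposed, min_prob=0.5, replace_hyphen_to_space=False, dp_data=None)
# -> "astronomers didier saumon"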
+ Args: + input_manifest_name: input nemo manifest; + short2full_name: text file with two columns: short_sent \t full_sent; + output_manifest_name: output nemo manifest; + spellmapper_results_name: text file with SpellMapper inference results; + min_prob: threshold on replacement probability; + replace_hyphen_to_space: if True, hyphens in replacements will be converted to spaces; + field_name: name of json field whose text we want to correct; + use_dp: bool = If True, additional replacement filtering will be applied using dynamic programming (works slow); + ngram_mappings: file with n-gram mappings, only needed if use_dp=True + min_dp_score_per_symbol: threshold on dynamic programming sum score averaged by hypothesis length + """ + short2full_sent = defaultdict(list) + sent2corrections = defaultdict(dict) + with open(short2full_name, "r", encoding="utf-8") as f: + for line in f: + s = line.strip() + short_sent, full_sent = s.split("\t") + short2full_sent[short_sent].append(full_sent) + sent2corrections[full_sent] = [] + + spellmapper_results = read_spellmapper_predictions(spellmapper_results_name) + dp_data = None + if use_dp: + dp_data = load_ngram_mappings_for_dp(ngram_mappings) + + for text, replacements, _ in spellmapper_results: + short_sent = text + if short_sent not in short2full_sent: + continue + # it can happen that one short sentence occurred in multiple full sentences + for full_sent in short2full_sent[short_sent]: + offset = full_sent.find(short_sent) + for begin, end, candidate, prob in replacements: + sent2corrections[full_sent].append((begin + offset, end + offset, candidate, prob)) + + out = open(output_manifest_name, "w", encoding="utf-8") + with open(input_manifest_name, "r", encoding="utf-8") as f: + for line in f: + record = json.loads(line.strip()) + sent = record[field_name] + record[field_name + "_before_correction"] = record[field_name] + if sent in sent2corrections: + record[field_name] = apply_replacements_to_text( + sent, + sent2corrections[sent], + min_prob=min_prob, + replace_hyphen_to_space=replace_hyphen_to_space, + dp_data=dp_data, + min_dp_score_per_symbol=min_dp_score_per_symbol, + ) + out.write(json.dumps(record) + "\n") + out.close() + + +def extract_and_split_text_from_manifest( + input_name: str, output_name: str, field_name: str = "pred_text", len_in_words: int = 16, step_in_words: int = 8 +) -> None: + """Extract text of the specified field in nemo manifest and split it into fragments (possibly with intersection). + The result is saved to a text file with two columns: short_sent \t full_sent. + This is useful if we want to process shorter sentences and then apply the results to the original long sentence. + Args: + input_name: input nemo manifest, + output_name: output text file, + field_name: name of json field from which we extract the sentence text, + len_in_words: maximum number of words in a fragment, + step_in_words: on how many words we move at each step. + For example, if the len_in_words=16 and step_in_words=8 the fragments will be intersected by half. 
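The windowing described above can be sketched as follows (illustrative word list):

words = [f"w{i}" for i in range(20)]
len_in_words, step_in_words = 16, 8
fragments = [" ".join(words[i : i + len_in_words]) for i in range(0, len(words), step_in_words)]
# Three overlapping fragments: words 0..15, 8..19, and 16..19;
# consecutive full windows overlap by 8 words, i.e. by half.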
+ """ + short2full_sent = set() + with open(input_name, "r", encoding="utf-8") as f: + for line in f: + record = json.loads(line.strip()) + sent = record[field_name] + if " " in sent: + raise ValueError("found multiple space in: " + sent) + words = sent.split() + for i in range(0, len(words), step_in_words): + short_sent = " ".join(words[i : i + len_in_words]) + short2full_sent.add((short_sent, sent)) + + with open(output_name, "w", encoding="utf-8") as out: + for short_sent, full_sent in short2full_sent: + out.write(short_sent + "\t" + full_sent + "\n") + + +def check_banned_replacements(src: str, dst: str) -> bool: + """This function is used to check is a pair of words/phrases is matching some common template that we don't want to replace with one another. + Args: + src: first phrase + dst: second phrase + Returns True if this replacement should be banned. + """ + # customers' => customer's + if src.endswith("s'") and dst.endswith("'s") and src[0:-2] == dst[0:-2]: + return True + # customer's => customers' + if src.endswith("'s") and dst.endswith("s'") and src[0:-2] == dst[0:-2]: + return True + # customers => customer's + if src.endswith("s") and dst.endswith("'s") and src[0:-1] == dst[0:-2]: + return True + # customer's => customers + if src.endswith("'s") and dst.endswith("s") and src[0:-2] == dst[0:-1]: + return True + # customers => customers' + if src.endswith("s") and dst.endswith("s'") and src[0:-1] == dst[0:-2]: + return True + # customers' => customers + if src.endswith("s'") and dst.endswith("s") and src[0:-2] == dst[0:-1]: + return True + # utilities => utility's + if src.endswith("ies") and dst.endswith("y's") and src[0:-3] == dst[0:-3]: + return True + # utility's => utilities + if src.endswith("y's") and dst.endswith("ies") and src[0:-3] == dst[0:-3]: + return True + # utilities => utility + if src.endswith("ies") and dst.endswith("y") and src[0:-3] == dst[0:-1]: + return True + # utility => utilities + if src.endswith("y") and dst.endswith("ies") and src[0:-1] == dst[0:-3]: + return True + # group is => group's + if src.endswith(" is") and dst.endswith("'s") and src[0:-3] == dst[0:-2]: + return True + # group's => group is + if src.endswith("'s") and dst.endswith(" is") and src[0:-2] == dst[0:-3]: + return True + # trex's => trex + if src.endswith("'s") and src[0:-2] == dst: + return True + # trex => trex's + if dst.endswith("'s") and dst[0:-2] == src: + return True + # increases => increase (but trimass => trimas is ok) + if src.endswith("s") and (not src.endswith("ss")) and src[0:-1] == dst: + return True + # increase => increases ((but trimas => trimass is ok)) + if dst.endswith("s") and (not dst.endswith("ss")) and dst[0:-1] == src: + return True + # anticipate => anticipated + if src.endswith("e") and dst.endswith("ed") and src[0:-1] == dst[0:-2]: + return True + # anticipated => anticipate + if src.endswith("ed") and dst.endswith("e") and src[0:-2] == dst[0:-1]: + return True + # regarded => regard + if src.endswith("ed") and src[0:-2] == dst: + return True + # regard => regarded + if dst.endswith("ed") and dst[0:-2] == src: + return True + # longer => long + if src.endswith("er") and src[0:-2] == dst: + return True + # long => longer + if dst.endswith("er") and dst[0:-2] == src: + return True + # discussed => discussing + if src.endswith("ed") and dst.endswith("ing") and src[0:-2] == dst[0:-3]: + return True + # discussing => discussed + if src.endswith("ing") and dst.endswith("ed") and src[0:-3] == dst[0:-2]: + return True + # discussion => discussing + if 
src.endswith("ion") and dst.endswith("ing") and src[0:-3] == dst[0:-3]: + return True + # discussing => discussion + if src.endswith("ing") and dst.endswith("ion") and src[0:-3] == dst[0:-3]: + return True + # dispensers => dispensing + if src.endswith("ers") and dst.endswith("ing") and src[0:-3] == dst[0:-3]: + return True + # dispensing => dispensers + if src.endswith("ing") and dst.endswith("ers") and src[0:-3] == dst[0:-3]: + return True + # discussion => discussed + if src.endswith("ion") and dst.endswith("ed") and src[0:-3] == dst[0:-2]: + return True + # discussed => discussion + if src.endswith("ed") and dst.endswith("ion") and src[0:-2] == dst[0:-3]: + return True + # incremental => increment + if src.endswith("ntal") and dst.endswith("nt") and src[0:-4] == dst[0:-2]: + return True + # increment => incremental + if src.endswith("nt") and dst.endswith("ntal") and src[0:-2] == dst[0:-4]: + return True + # delivery => deliverer + if src.endswith("ery") and dst.endswith("erer") and src[0:-3] == dst[0:-4]: + return True + # deliverer => delivery + if src.endswith("erer") and dst.endswith("ery") and src[0:-4] == dst[0:-3]: + return True + # comparably => comparable + if src.endswith("bly") and dst.endswith("ble") and src[0:-3] == dst[0:-3]: + return True + # comparable => comparably + if src.endswith("ble") and dst.endswith("bly") and src[0:-3] == dst[0:-3]: + return True + # beautiful => beautifully + if src.endswith("l") and dst.endswith("lly") and src[0:-1] == dst[0:-3]: + return True + # beautifully => beautiful + if src.endswith("lly") and dst.endswith("l") and src[0:-3] == dst[0:-1]: + return True + # america => american + if src.endswith("a") and dst.endswith("an") and src[0:-1] == dst[0:-2]: + return True + # american => america + if src.endswith("an") and dst.endswith("a") and src[0:-2] == dst[0:-1]: + return True + # reinvesting => investing + if src.startswith("re") and src[2:] == dst: + return True + # investing => reinvesting + if dst.startswith("re") and dst[2:] == src: + return True + # outperformance => performance + if src.startswith("out") and src[3:] == dst: + return True + # performance => outperformance + if dst.startswith("out") and dst[3:] == src: + return True + return False diff --git a/nemo/collections/nlp/data/text_normalization_as_tagging/utils.py b/nemo/collections/nlp/data/text_normalization_as_tagging/utils.py index 253f7a41c703..9d5f5b7b23ad 100644 --- a/nemo/collections/nlp/data/text_normalization_as_tagging/utils.py +++ b/nemo/collections/nlp/data/text_normalization_as_tagging/utils.py @@ -17,6 +17,8 @@ from itertools import groupby from typing import Dict, List, Tuple +import numpy as np + """Utility functions for Thutmose Tagger.""" @@ -305,3 +307,197 @@ def get_src_and_dst_for_alignment( ) return written_str, spoken, " ".join(same_begin), " ".join(same_end) + + +def fill_alignment_matrix( + fline2: str, fline3: str, gline2: str, gline3: str +) -> Tuple[np.ndarray, List[str], List[str]]: + """Parse Giza++ direct and reverse alignment results and represent them as an alignment matrix + + Args: + fline2: e.g. "_2 0 1 4_" + fline3: e.g. "NULL ({ }) twenty ({ 1 }) fourteen ({ 2 3 4 })" + gline2: e.g. "twenty fourteen" + gline3: e.g. 
"NULL ({ }) _2 ({ 1 }) 0 ({ }) 1 ({ }) 4_ ({ 2 })" + + Returns: + matrix: a numpy array of shape (src_len, dst_len) filled with [0, 1, 2, 3], where 3 means a reliable alignment + the corresponding words were aligned to one another in direct and reverse alignment runs, 1 and 2 mean that the + words were aligned only in one direction, 0 - no alignment. + srctokens: e.g. ["twenty", "fourteen"] + dsttokens: e.g. ["_2", "0", "1", "4_"] + + For example, the alignment matrix for the above example may look like: + [[3, 0, 0, 0] + [0, 2, 2, 3]] + """ + if fline2 is None or gline2 is None or fline3 is None or gline3 is None: + raise ValueError(f"empty params") + srctokens = gline2.split() + dsttokens = fline2.split() + pattern = r"([^ ]+) \(\{ ([^\(\{\}\)]*) \}\)" + src2dst = re.findall(pattern, fline3.replace("({ })", "({ })")) + dst2src = re.findall(pattern, gline3.replace("({ })", "({ })")) + if len(src2dst) != len(srctokens) + 1: + raise ValueError( + "length mismatch: len(src2dst)=" + + str(len(src2dst)) + + "; len(srctokens)" + + str(len(srctokens)) + + "\n" + + gline2 + + "\n" + + fline3 + ) + if len(dst2src) != len(dsttokens) + 1: + raise ValueError( + "length mismatch: len(dst2src)=" + + str(len(dst2src)) + + "; len(dsttokens)" + + str(len(dsttokens)) + + "\n" + + fline2 + + "\n" + + gline3 + ) + matrix = np.zeros((len(srctokens), len(dsttokens))) + for i in range(1, len(src2dst)): + token, to_str = src2dst[i] + if to_str == "": + continue + to = list(map(int, to_str.split())) + for t in to: + matrix[i - 1][t - 1] = 2 + + for i in range(1, len(dst2src)): + token, to_str = dst2src[i] + if to_str == "": + continue + to = list(map(int, to_str.split())) + for t in to: + matrix[t - 1][i - 1] += 1 + + return matrix, srctokens, dsttokens + + +def check_monotonicity(matrix: np.ndarray) -> bool: + """Check if alignment is monotonous - i.e. the relative order is preserved (no swaps). + + Args: + matrix: a numpy array of shape (src_len, dst_len) filled with [0, 1, 2, 3], where 3 means a reliable alignment + the corresponding words were aligned to one another in direct and reverse alignment runs, 1 and 2 mean that the + words were aligned only in one direction, 0 - no alignment. + """ + is_sorted = lambda k: np.all(k[:-1] <= k[1:]) + + a = np.argwhere(matrix == 3) + b = np.argwhere(matrix == 2) + c = np.vstack((a, b)) + d = c[c[:, 1].argsort()] # sort by second column (less important) + d = d[d[:, 0].argsort(kind="mergesort")] + return is_sorted(d[:, 1]) + + +def get_targets(matrix: np.ndarray, dsttokens: List[str], delimiter: str) -> List[str]: + """Join some of the destination tokens, so that their number becomes the same as the number of input words. + Unaligned tokens tend to join to the left aligned token. + + Args: + matrix: a numpy array of shape (src_len, dst_len) filled with [0, 1, 2, 3], where 3 means a reliable alignment + the corresponding words were aligned to one another in direct and reverse alignment runs, 1 and 2 mean that the + words were aligned only in one direction, 0 - no alignment. + dsttokens: e.g. ["_2", "0", "1", "4_"] + Returns: + targets: list of string tokens, with one-to-one correspondence to matrix.shape[0] + + Example: + If we get + matrix=[[3, 0, 0, 0] + [0, 2, 2, 3]] + dsttokens=["_2", "0", "1", "4_"] + it gives + targets = ["_201", "4_"] + Actually, this is a mistake instead of ["_20", "14_"]. That will be further corrected by regular expressions. 
+ """ + targets = [] + last_covered_dst_id = -1 + for i in range(len(matrix)): + dstlist = [] + for j in range(last_covered_dst_id + 1, len(dsttokens)): + # matrix[i][j] == 3: safe alignment point + if matrix[i][j] == 3 or ( + j == last_covered_dst_id + 1 + and np.all(matrix[i, :] == 0) # if the whole line does not have safe points + and np.all(matrix[:, j] == 0) # and the whole column does not have safe points, match them + ): + if len(targets) == 0: # if this is first safe point, attach left unaligned columns to it, if any + for k in range(0, j): + if np.all(matrix[:, k] == 0): # if column k does not have safe points + dstlist.append(dsttokens[k]) + else: + break + dstlist.append(dsttokens[j]) + last_covered_dst_id = j + for k in range(j + 1, len(dsttokens)): + if np.all(matrix[:, k] == 0): # if column k does not have safe points + dstlist.append(dsttokens[k]) + last_covered_dst_id = k + else: + break + + if len(dstlist) > 0: + targets.append(delimiter.join(dstlist)) + else: + targets.append("") + return targets + + +def get_targets_from_back(matrix: np.ndarray, dsttokens: List[str], delimiter: str) -> List[str]: + """Join some of the destination tokens, so that their number becomes the same as the number of input words. + Unaligned tokens tend to join to the right aligned token. + + Args: + matrix: a numpy array of shape (src_len, dst_len) filled with [0, 1, 2, 3], where 3 means a reliable alignment + the corresponding words were aligned to one another in direct and reverse alignment runs, 1 and 2 mean that the + words were aligned only in one direction, 0 - no alignment. + dsttokens: e.g. ["_2", "0", "1", "4_"] + Returns: + targets: list of string tokens, with one-to-one correspondence to matrix.shape[0] + + Example: + If we get + matrix=[[3, 0, 0, 0] + [0, 2, 2, 3]] + dsttokens=["_2", "0", "1", "4_"] + it gives + targets = ["_2", "014_"] + Actually, this is a mistake instead of ["_20", "14_"]. That will be further corrected by regular expressions. 
+ """ + + targets = [] + last_covered_dst_id = len(dsttokens) + for i in range(len(matrix) - 1, -1, -1): + dstlist = [] + for j in range(last_covered_dst_id - 1, -1, -1): + if matrix[i][j] == 3 or ( + j == last_covered_dst_id - 1 and np.all(matrix[i, :] == 0) and np.all(matrix[:, j] == 0) + ): + if len(targets) == 0: + for k in range(len(dsttokens) - 1, j, -1): + if np.all(matrix[:, k] == 0): + dstlist.append(dsttokens[k]) + else: + break + dstlist.append(dsttokens[j]) + last_covered_dst_id = j + for k in range(j - 1, -1, -1): + if np.all(matrix[:, k] == 0): + dstlist.append(dsttokens[k]) + last_covered_dst_id = k + else: + break + if len(dstlist) > 0: + targets.append(delimiter.join(list(reversed(dstlist)))) + else: + targets.append("") + return list(reversed(targets)) diff --git a/nemo/collections/nlp/models/__init__.py b/nemo/collections/nlp/models/__init__.py index 90e692a238a6..75b48f64df13 100644 --- a/nemo/collections/nlp/models/__init__.py +++ b/nemo/collections/nlp/models/__init__.py @@ -30,6 +30,7 @@ from nemo.collections.nlp.models.language_modeling.transformer_lm_model import TransformerLMModel from nemo.collections.nlp.models.machine_translation import MTEncDecModel from nemo.collections.nlp.models.question_answering.qa_model import QAModel +from nemo.collections.nlp.models.spellchecking_asr_customization import SpellcheckingAsrCustomizationModel from nemo.collections.nlp.models.text2sparql.text2sparql_model import Text2SparqlModel from nemo.collections.nlp.models.text_classification import TextClassificationModel from nemo.collections.nlp.models.text_normalization_as_tagging import ThutmoseTaggerModel diff --git a/nemo/collections/nlp/models/spellchecking_asr_customization/__init__.py b/nemo/collections/nlp/models/spellchecking_asr_customization/__init__.py new file mode 100644 index 000000000000..5e94de32e9aa --- /dev/null +++ b/nemo/collections/nlp/models/spellchecking_asr_customization/__init__.py @@ -0,0 +1,18 @@ +# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +from nemo.collections.nlp.models.spellchecking_asr_customization.spellchecking_model import ( + SpellcheckingAsrCustomizationModel, +) diff --git a/nemo/collections/nlp/models/spellchecking_asr_customization/spellchecking_model.py b/nemo/collections/nlp/models/spellchecking_asr_customization/spellchecking_model.py new file mode 100644 index 000000000000..fc889de2dc63 --- /dev/null +++ b/nemo/collections/nlp/models/spellchecking_asr_customization/spellchecking_model.py @@ -0,0 +1,526 @@ +# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +from time import perf_counter +from typing import Dict, Optional + +import torch +from omegaconf import DictConfig +from pytorch_lightning import Trainer + +from nemo.collections.common.losses import CrossEntropyLoss +from nemo.collections.nlp.data.spellchecking_asr_customization import ( + SpellcheckingAsrCustomizationDataset, + SpellcheckingAsrCustomizationTestDataset, + TarredSpellcheckingAsrCustomizationDataset, + bert_example, +) +from nemo.collections.nlp.data.text_normalization_as_tagging.utils import read_label_map +from nemo.collections.nlp.metrics.classification_report import ClassificationReport +from nemo.collections.nlp.models.nlp_model import NLPModel +from nemo.collections.nlp.modules.common.token_classifier import TokenClassifier +from nemo.collections.nlp.parts.utils_funcs import tensor2list +from nemo.core.classes.common import PretrainedModelInfo, typecheck +from nemo.core.neural_types import LogitsType, NeuralType +from nemo.utils import logging +from nemo.utils.decorators import experimental + +__all__ = ["SpellcheckingAsrCustomizationModel"] + + +@experimental +class SpellcheckingAsrCustomizationModel(NLPModel): + """ + BERT-based model for Spellchecking ASR Customization. + It takes as input ASR hypothesis and candidate customization entries. + It labels the hypothesis with correct entry index or 0. + Example input: [CLS] a s t r o n o m e r s _ d i d i e _ s o m o n _ a n d _ t r i s t i a n _ g l l o [SEP] d i d i e r _ s a u m o n [SEP] a s t r o n o m i e [SEP] t r i s t a n _ g u i l l o t [SEP] ... + Input segments: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 4 + Example output: 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 3 3 3 3 3 3 3 3 3 3 3 3 3 0 ... 
+ """ + + @property + def output_types(self) -> Optional[Dict[str, NeuralType]]: + return { + "logits": NeuralType(('B', 'T', 'D'), LogitsType()), + } + + @property + def input_module(self): + return self + + @property + def output_module(self): + return self + + def __init__(self, cfg: DictConfig, trainer: Trainer = None) -> None: + super().__init__(cfg=cfg, trainer=trainer) + + # Label map contains 11 labels: 0 for nothing, 1..10 for target candidate ids + label_map_file = self.register_artifact("label_map", cfg.label_map, verify_src_exists=True) + + # Semiotic classes for this model consist only of classes CUSTOM(means fragment containing custom candidate) and PLAIN (any other single-character fragment) + # They are used only during validation step, to calculate accuracy for CUSTOM and PLAIN classes separately + semiotic_classes_file = self.register_artifact( + "semiotic_classes", cfg.semiotic_classes, verify_src_exists=True + ) + self.label_map = read_label_map(label_map_file) + self.semiotic_classes = read_label_map(semiotic_classes_file) + + self.num_labels = len(self.label_map) + self.num_semiotic_labels = len(self.semiotic_classes) + self.id_2_tag = {tag_id: tag for tag, tag_id in self.label_map.items()} + self.id_2_semiotic = {semiotic_id: semiotic for semiotic, semiotic_id in self.semiotic_classes.items()} + self.max_sequence_len = cfg.get('max_sequence_len', self.tokenizer.tokenizer.model_max_length) + + # Setup to track metrics + # We will have (len(self.semiotic_classes) + 1) labels. + # Last one stands for WRONG (span in which the predicted tags don't match the labels) + # This is needed to feed the sequence of classes to classification_report during validation + label_ids = self.semiotic_classes.copy() + label_ids["WRONG"] = len(self.semiotic_classes) + self.tag_classification_report = ClassificationReport( + len(self.semiotic_classes) + 1, label_ids=label_ids, mode='micro', dist_sync_on_step=True + ) + + self.hidden_size = cfg.hidden_size + + # hidden size is doubled because in forward we concatenate embeddings for characters and embeddings for subwords + self.logits = TokenClassifier( + self.hidden_size * 2, num_classes=self.num_labels, num_layers=1, log_softmax=False, dropout=0.1 + ) + + self.loss_fn = CrossEntropyLoss(logits_ndim=3) + + self.builder = bert_example.BertExampleBuilder( + self.label_map, self.semiotic_classes, self.tokenizer.tokenizer, self.max_sequence_len + ) + + @typecheck() + def forward( + self, + input_ids, + input_mask, + segment_ids, + input_ids_for_subwords, + input_mask_for_subwords, + segment_ids_for_subwords, + character_pos_to_subword_pos, + ): + """ + Same BERT-based model is used to calculate embeddings for sequence of single characters and for sequence of subwords. + Then we concatenate subword embeddings to each character corresponding to this subword. + We return logits for each character x 11 labels: 0 - character doesn't belong to any candidate, 1..10 - character belongs to candidate with this id. 
+ + # Arguments + input_ids: token_ids for single characters; .shape = [batch_size, char_seq_len]; .dtype = int64 + input_mask: mask for input_ids(1 - real, 0 - padding); .shape = [batch_size, char_seq_len]; .dtype = int64 + segment_ids: segment types for input_ids (0 - ASR-hypothesis, 1..10 - candidate); .shape = [batch_size, char_seq_len]; .dtype = int64 + input_ids_for_subwords: token_ids for subwords; .shape = [batch_size, subword_seq_len]; .dtype = int64 + input_mask_for_subwords: mask for input_ids_for_subwords(1 - real, 0 - padding); .shape = [batch_size, subword_seq_len]; .dtype = int64 + segment_ids_for_subwords: segment types for input_ids_for_subwords (0 - ASR-hypothesis, 1..10 - candidate); .shape = [batch_size, subword_seq_len]; .dtype = int64 + character_pos_to_subword_pos: tensor mapping character position in the input sequence to subword position; .shape = [batch_size, char_seq_len]; .dtype = int64 + """ + + # src_hiddens.shape = [batch_size, char_seq_len, bert_hidden_size]; .dtype=float32 + src_hiddens = self.bert_model(input_ids=input_ids, token_type_ids=segment_ids, attention_mask=input_mask) + # src_hiddens_for_subwords.shape = [batch_size, subword_seq_len, bert_hidden_size]; .dtype=float32 + src_hiddens_for_subwords = self.bert_model( + input_ids=input_ids_for_subwords, + token_type_ids=segment_ids_for_subwords, + attention_mask=input_mask_for_subwords, + ) + + # Next three commands concatenate subword embeddings to each character embedding of the corresponding subword + # index.shape = [batch_size, char_seq_len, bert_hidden_size]; .dtype=int64 + index = character_pos_to_subword_pos.unsqueeze(-1).expand((-1, -1, src_hiddens_for_subwords.shape[2])) + # src_hiddens_2.shape = [batch_size, char_seq_len, bert_hidden_size]; .dtype=float32 + src_hiddens_2 = torch.gather(src_hiddens_for_subwords, 1, index) + # src_hiddens.shape = [batch_size, char_seq_len, bert_hidden_size * 2]; .dtype=float32 + src_hiddens = torch.cat((src_hiddens, src_hiddens_2), 2) + + # logits.shape = [batch_size, char_seq_len, num_labels]; num_labels=11: ids from 0 to 10; .dtype=float32 + logits = self.logits(hidden_states=src_hiddens) + return logits + + # Training + def training_step(self, batch, batch_idx): + """ + Lightning calls this inside the training loop with the data from the training dataloader + passed in as `batch`. + """ + + ( + input_ids, + input_mask, + segment_ids, + input_ids_for_subwords, + input_mask_for_subwords, + segment_ids_for_subwords, + character_pos_to_subword_pos, + labels_mask, + labels, + _, + ) = batch + logits = self.forward( + input_ids=input_ids, + input_mask=input_mask, + segment_ids=segment_ids, + input_ids_for_subwords=input_ids_for_subwords, + input_mask_for_subwords=input_mask_for_subwords, + segment_ids_for_subwords=segment_ids_for_subwords, + character_pos_to_subword_pos=character_pos_to_subword_pos, + ) + loss = self.loss_fn(logits=logits, labels=labels, loss_mask=labels_mask) + lr = self._optimizer.param_groups[0]['lr'] + self.log('train_loss', loss) + self.log('lr', lr, prog_bar=True) + return {'loss': loss, 'lr': lr} + + # Validation and Testing + def validation_step(self, batch, batch_idx): + """ + Lightning calls this inside the validation loop with the data from the validation dataloader + passed in as `batch`. 
+ """ + ( + input_ids, + input_mask, + segment_ids, + input_ids_for_subwords, + input_mask_for_subwords, + segment_ids_for_subwords, + character_pos_to_subword_pos, + labels_mask, + labels, + spans, + ) = batch + logits = self.forward( + input_ids=input_ids, + input_mask=input_mask, + segment_ids=segment_ids, + input_ids_for_subwords=input_ids_for_subwords, + input_mask_for_subwords=input_mask_for_subwords, + segment_ids_for_subwords=segment_ids_for_subwords, + character_pos_to_subword_pos=character_pos_to_subword_pos, + ) + tag_preds = torch.argmax(logits, dim=2) + + # Update tag classification_report + for input_mask_seq, segment_seq, prediction_seq, label_seq, span_seq in zip( + input_mask.tolist(), segment_ids.tolist(), tag_preds.tolist(), labels.tolist(), spans.tolist() + ): + # Here we want to track whether the predicted output matches ground truth labels for each whole span. + # We construct the special input for classification report, for example: + # span_labels = [PLAIN, PLAIN, PLAIN, PLAIN, CUSTOM, CUSTOM] + # span_predictions = [PLAIN, WRONG, PLAIN, PLAIN, WRONG, CUSTOM] + # Note that the number of PLAIN and CUSTOM occurrences in the report is not comparable, + # because PLAIN is for characters, and CUSTOM is for phrases. + span_labels = [] + span_predictions = [] + plain_cid = self.semiotic_classes["PLAIN"] + wrong_cid = self.tag_classification_report.num_classes - 1 + + # First we loop through all predictions for input characters with label=0, they are regarded as separate spans with PLAIN class. + # It either stays as PLAIN if the model prediction is 0, or turns to WRONG. + for i in range(len(segment_seq)): + if input_mask_seq[i] == 0: + continue + if segment_seq[i] > 0: # token does not belong to ASR-hypothesis => it's over + break + if label_seq[i] == 0: + span_labels.append(plain_cid) + if prediction_seq[i] == 0: + span_predictions.append(plain_cid) + else: + span_predictions.append(wrong_cid) + # if label_seq[i] != 0 then it belongs to CUSTOM span and will be handled later + + # Second we loop through spans tensor which contains only spans for CUSTOM class. + # It stays as CUSTOM if all predictions for the whole span are equal to the labels, otherwise it turns to WRONG. + for cid, start, end in span_seq: + if cid == -1: + break + span_labels.append(cid) + if prediction_seq[start:end] == label_seq[start:end]: + span_predictions.append(cid) + else: + span_predictions.append(wrong_cid) + + if len(span_labels) != len(span_predictions): + raise ValueError( + "Length mismatch: len(span_labels)=" + + str(len(span_labels)) + + "; len(span_predictions)=" + + str(len(span_predictions)) + ) + self.tag_classification_report( + torch.tensor(span_predictions).to(self.device), torch.tensor(span_labels).to(self.device) + ) + + val_loss = self.loss_fn(logits=logits, labels=labels, loss_mask=labels_mask) + return {'val_loss': val_loss} + + def validation_epoch_end(self, outputs): + """ + Called at the end of validation to aggregate outputs. + :param outputs: list of individual outputs of each validation step. 
+ """
+ avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
+
+ # Calculate metrics and classification report
+ # Note that in our task recall = accuracy, and the recall column is the per class accuracy
+ _, tag_accuracy, _, tag_report = self.tag_classification_report.compute()
+
+ logging.info("Total tag accuracy: " + str(tag_accuracy))
+ logging.info(tag_report)
+
+ self.log('val_loss', avg_loss, prog_bar=True)
+ self.log('tag accuracy', tag_accuracy)
+
+ self.tag_classification_report.reset()
+
+ def test_step(self, batch, batch_idx):
+ """
+ Lightning calls this inside the test loop with the data from the test dataloader
+ passed in as `batch`.
+ """
+ return self.validation_step(batch, batch_idx)
+
+ def test_epoch_end(self, outputs):
+ """
+ Called at the end of test to aggregate outputs.
+ :param outputs: list of individual outputs of each test step.
+ """
+ return self.validation_epoch_end(outputs)
+
+ # Functions for inference
+
+ @torch.no_grad()
+ def infer(self, dataloader_cfg: DictConfig, input_name: str, output_name: str) -> None:
+ """ Main function for inference.
+
+ Args:
+ dataloader_cfg: config for dataloader
+ input_name: Input file with tab-separated text records. Each record consists of 2 items:
+ - ASR hypothesis
+ - candidate phrases separated by semicolon
+ output_name: Output file with tab-separated text records. Each record consists of 4 items:
+ - ASR hypothesis
+ - candidate phrases separated by semicolon
+ - list of possible replacements with probabilities (start, end, candidate_id, prob), separated by semicolon
+ - list of labels, predicted for each letter (for debug purposes)
+
+ Returns: None
+ """
+ mode = self.training
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+
+ try:
+ # Switch model to evaluation mode
+ self.eval()
+ self.to(device)
+ logging_level = logging.get_verbosity()
+ logging.set_verbosity(logging.WARNING)
+ infer_datalayer = self._setup_infer_dataloader(dataloader_cfg, input_name)
+
+ all_tag_preds = (
+ []
+ ) # list(size=number of sentences) of lists(size=number of letters) of tag predictions (best candidate_id for each letter)
+ all_possible_replacements = (
+ []
+ ) # list(size=number of sentences) of lists(size=number of potential replacements) of tuples(start, end, candidate_id, prob)
+ for batch in iter(infer_datalayer):
+ (
+ input_ids,
+ input_mask,
+ segment_ids,
+ input_ids_for_subwords,
+ input_mask_for_subwords,
+ segment_ids_for_subwords,
+ character_pos_to_subword_pos,
+ fragment_indices,
+ ) = batch
+
+ # tag_logits.shape = [batch_size, char_seq_len, num_labels]; num_labels=11: ids from 0 to 10; .dtype=float32
+ tag_logits = self.forward(
+ input_ids=input_ids.to(self.device),
+ input_mask=input_mask.to(self.device),
+ segment_ids=segment_ids.to(self.device),
+ input_ids_for_subwords=input_ids_for_subwords.to(self.device),
+ input_mask_for_subwords=input_mask_for_subwords.to(self.device),
+ segment_ids_for_subwords=segment_ids_for_subwords.to(self.device),
+ character_pos_to_subword_pos=character_pos_to_subword_pos.to(self.device),
+ )
+
+ # fragment_indices.shape=[batch_size, num_fragments, 3], where last dimension is [start, end, label], where label is candidate id from 1 to 10
+ # Next we want to convert predictions for separate letters to probabilities for each whole fragment from fragment_indices.
+ # To achieve this we first sum the letter logits in each fragment and divide by its length.
+ # (We use .cumsum and then difference between end and start to get sum per fragment).
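+ # Illustrative toy example (hypothetical numbers): with per-letter logit rows [l0, l1, l2, l3] and a fragment
+ # covering letters 1..2 (start=1, end=3), the zero-padded cumsum gives c[1] = l0 and c[3] = l0 + l1 + l2,
+ # so c[end] - c[start] = l1 + l2 is exactly the sum of logits inside the fragment, which is then divided
+ # by the fragment length (end - start = 2) to get mean logits per fragment.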
+ # Then we convert logits to probs with softmax and for each fragment extract only the prob for given label. + # Finally we get a list of tuples (start, end, label, prob) + indices_len = fragment_indices.shape[1] + # this padding adds a row of zeros (size=num_labels) as first element of sequence in second dimension. This is needed for cumsum operations. + padded_logits = torch.nn.functional.pad(tag_logits, pad=(0, 0, 1, 0)) + ( + batch_size, + seq_len, + num_labels, + ) = padded_logits.shape # seq_len is +1 compared to that of tag_logits, because of padding + # cumsum.shape=[batch_size, seq_len, num_labels] + cumsum = padded_logits.cumsum(dim=1) + # the size -1 is inferred from other dimensions. We get rid of batch dimension. + cumsum_view = cumsum.view(-1, num_labels) + word_index = ( + torch.ones((batch_size, indices_len), dtype=torch.long) + * torch.arange(batch_size).reshape((-1, 1)) + * seq_len + ).view(-1) + lower_index = (fragment_indices[..., 0]).view(-1) + word_index + higher_index = (fragment_indices[..., 1]).view(-1) + word_index + d_index = (higher_index - lower_index).reshape((-1, 1)).to(self.device) # word lengths + dlog = cumsum_view[higher_index, :] - cumsum_view[lower_index, :] # sum of logits + # word_logits.shape=[batch_size, indices_len, num_labels] + word_logits = (dlog / d_index.float()).view(batch_size, indices_len, num_labels) + # convert logits to probs, same shape + word_probs = torch.nn.functional.softmax(word_logits, dim=-1).to(self.device) + # candidate_index.shape=[batch_size, indices_len] + candidate_index = fragment_indices[:, :, 2].to(self.device) + # candidate_probs.shape=[batch_size, indices_len] + candidate_probs = torch.take_along_dim(word_probs, candidate_index.unsqueeze(2), dim=-1).squeeze(2) + for i in range(batch_size): + possible_replacements = [] + for j in range(indices_len): + start, end, candidate_id = ( + int(fragment_indices[i][j][0]), + int(fragment_indices[i][j][1]), + int(fragment_indices[i][j][2]), + ) + if candidate_id == 0: # this is padding + continue + prob = round(float(candidate_probs[i][j]), 5) + if prob < 0.01: + continue + # -1 because in the output file we will not have a [CLS] token + possible_replacements.append( + str(start - 1) + " " + str(end - 1) + " " + str(candidate_id) + " " + str(prob) + ) + all_possible_replacements.append(possible_replacements) + + # torch.argmax(tag_logits, dim=-1) gives a tensor of best predicted labels with shape [batch_size, char_seq_len], .dtype = int64 + # character_preds is list of lists of predicted labels + character_preds = tensor2list(torch.argmax(tag_logits, dim=-1)) + all_tag_preds.extend(character_preds) + + if len(all_possible_replacements) != len(all_tag_preds) or len(all_possible_replacements) != len( + infer_datalayer.dataset.examples + ): + raise IndexError( + "number of sentences mismatch: len(all_possible_replacements)=" + + str(len(all_possible_replacements)) + + "; len(all_tag_preds)=" + + str(len(all_tag_preds)) + + "; len(infer_datalayer.dataset.examples)=" + + str(len(infer_datalayer.dataset.examples)) + ) + # save results to file + with open(output_name, "w", encoding="utf-8") as out: + for i in range(len(infer_datalayer.dataset.examples)): + hyp, ref = infer_datalayer.dataset.hyps_refs[i] + num_letters = hyp.count(" ") + 1 + tag_pred_str = " ".join(list(map(str, all_tag_preds[i][1 : (num_letters + 1)]))) + possible_replacements_str = ";".join(all_possible_replacements[i]) + out.write(hyp + "\t" + ref + "\t" + possible_replacements_str + "\t" + tag_pred_str + "\n") + + 
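+ # For illustration only (hypothetical values), one output line would then look like:
+ # "d i d i e _ s o m o n\td i d i e r _ s a u m o n;a s t r o n o m i e\t0 11 1 0.99986\t1 1 1 1 1 1 1 1 1 1 1"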
except Exception as e:
+ raise ValueError("Error processing file " + input_name) from e
+
+ finally:
+ # set mode back to its original value
+ self.train(mode=mode)
+ logging.set_verbosity(logging_level)
+
+ # Functions for processing data
+ def setup_training_data(self, train_data_config: Optional[DictConfig]):
+ if not train_data_config or not train_data_config.data_path:
+ logging.info(
+ f"Dataloader config or data_path for the train is missing, so no data loader for train is created!"
+ )
+ self._train_dl = None
+ return
+ self._train_dl = self._setup_dataloader_from_config(cfg=train_data_config, data_split="train")
+
+ def setup_validation_data(self, val_data_config: Optional[DictConfig]):
+ if not val_data_config or not val_data_config.data_path:
+ logging.info(
+ f"Dataloader config or data_path for the validation is missing, so no data loader for validation is created!"
+ )
+ self._validation_dl = None
+ return
+ self._validation_dl = self._setup_dataloader_from_config(cfg=val_data_config, data_split="val")
+
+ def setup_test_data(self, test_data_config: Optional[DictConfig]):
+ if not test_data_config or test_data_config.data_path is None:
+ logging.info(
+ f"Dataloader config or data_path for the test is missing, so no data loader for test is created!"
+ )
+ self._test_dl = None
+ return
+ self._test_dl = self._setup_dataloader_from_config(cfg=test_data_config, data_split="test")
+
+ def _setup_dataloader_from_config(self, cfg: DictConfig, data_split: str):
+ start_time = perf_counter()
+ logging.info(f'Creating {data_split} dataset')
+ if cfg.get("use_tarred_dataset", False):
+ dataset = TarredSpellcheckingAsrCustomizationDataset(
+ cfg.data_path,
+ shuffle_n=cfg.get("tar_shuffle_n", 100),
+ global_rank=self.global_rank,
+ world_size=self.world_size,
+ pad_token_id=self.builder._pad_id,
+ )
+ else:
+ input_file = cfg.data_path
+ dataset = SpellcheckingAsrCustomizationDataset(input_file=input_file, example_builder=self.builder)
+ dl = torch.utils.data.DataLoader(
+ dataset=dataset, batch_size=cfg.batch_size, shuffle=cfg.shuffle, collate_fn=dataset.collate_fn
+ )
+ running_time = perf_counter() - start_time
+ logging.info(f'Took {running_time} seconds')
+ return dl
+
+ def _setup_infer_dataloader(self, cfg: DictConfig, input_name: str) -> 'torch.utils.data.DataLoader':
+ """
+ Setup function for an inference data loader.
+ Args:
+ cfg: config dictionary containing data loader params like batch_size, num_workers and pin_memory
+ input_name: path to input file.
+ Returns:
+ A pytorch DataLoader.
+ """ + dataset = SpellcheckingAsrCustomizationTestDataset(input_name, example_builder=self.builder) + return torch.utils.data.DataLoader( + dataset=dataset, + batch_size=cfg["batch_size"], + shuffle=False, + num_workers=cfg.get("num_workers", 0), + pin_memory=cfg.get("pin_memory", False), + drop_last=False, + collate_fn=dataset.collate_fn, + ) + + @classmethod + def list_available_models(cls) -> Optional[PretrainedModelInfo]: + return None diff --git a/scripts/dataset_processing/spoken_wikipedia/run.sh b/scripts/dataset_processing/spoken_wikipedia/run.sh index 2894eb1dc55e..5ae447c9a1a4 100644 --- a/scripts/dataset_processing/spoken_wikipedia/run.sh +++ b/scripts/dataset_processing/spoken_wikipedia/run.sh @@ -102,7 +102,7 @@ ${NEMO_PATH}/tools/ctc_segmentation/run_segmentation.sh \ --MODEL_NAME_OR_PATH=${MODEL_FOR_SEGMENTATION} \ --DATA_DIR=${INPUT_DIR}_prepared \ --OUTPUT_DIR=${OUTPUT_DIR} \ ---MIN_SCORE=${MIN_SCORE} +--MIN_SCORE=${THRESHOLD} # Thresholds for filtering CER_THRESHOLD=20 diff --git a/tests/collections/nlp/test_spellchecking_asr_customization.py b/tests/collections/nlp/test_spellchecking_asr_customization.py new file mode 100644 index 000000000000..8e4d6e9a7b8f --- /dev/null +++ b/tests/collections/nlp/test_spellchecking_asr_customization.py @@ -0,0 +1,1102 @@ +# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import pytest +from transformers import AutoTokenizer + +from nemo.collections.nlp.data.spellchecking_asr_customization.bert_example import BertExampleBuilder +from nemo.collections.nlp.data.spellchecking_asr_customization.utils import ( + apply_replacements_to_text, + substitute_replacements_in_text, +) + + +@pytest.mark.unit +def test_substitute_replacements_in_text(): + text = "we began the further diversification of our revenue base with the protterra supply agreement and the navastar joint development agreement" + replacements = [(66, 75, 'pro-terra', 0.99986), (101, 109, 'navistar', 0.996)] + gold_text = "we began the further diversification of our revenue base with the pro-terra supply agreement and the navistar joint development agreement" + corrected_text = substitute_replacements_in_text(text, replacements, replace_hyphen_to_space=False) + assert corrected_text == gold_text + + gold_text_no_hyphen = "we began the further diversification of our revenue base with the pro terra supply agreement and the navistar joint development agreement" + corrected_text = substitute_replacements_in_text(text, replacements, replace_hyphen_to_space=True) + assert corrected_text == gold_text_no_hyphen + + +@pytest.mark.unit +def test_apply_replacements_to_text(): + + # min_prob = 0.5 + # dp_data = None, + # min_dp_score_per_symbol: float = -99.9 + + # test more than one fragment to replace, test multiple same replacements + text = "we began the further diversification of our revenue base with the protterra supply agreement and the navastar joint development agreement" + replacements = [ + (66, 75, 'proterra', 0.99986), + (66, 75, 'proterra', 0.9956), + (101, 109, 'navistar', 0.93), + (101, 109, 'navistar', 0.91), + (101, 109, 'navistar', 0.92), + ] + gold_text = "we began the further diversification of our revenue base with the proterra supply agreement and the navistar joint development agreement" + corrected_text = apply_replacements_to_text( + text, replacements, min_prob=0.5, replace_hyphen_to_space=False, dp_data=None + ) + assert corrected_text == gold_text + + # test that min_prob works + gold_text = "we began the further diversification of our revenue base with the proterra supply agreement and the navastar joint development agreement" + corrected_text = apply_replacements_to_text( + text, replacements, min_prob=0.95, replace_hyphen_to_space=False, dp_data=None + ) + assert corrected_text == gold_text + + +@pytest.fixture() +def bert_example_builder(): + tokenizer = AutoTokenizer.from_pretrained("huawei-noah/TinyBERT_General_6L_768D") + label_map = {"0": 0, "1": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9, "10": 10} + semiotic_classes = {"PLAIN": 0, "CUSTOM": 1} + max_seq_len = 256 + builder = BertExampleBuilder(label_map, semiotic_classes, tokenizer, max_seq_len) + return builder + + +@pytest.mark.skip("Doesn't work download when testing on github, for unknown reason") +@pytest.mark.with_downloads +@pytest.mark.unit +def test_creation(bert_example_builder): + assert bert_example_builder._tokenizer is not None + + +@pytest.mark.skip("Doesn't work download when testing on github, for unknown reason") +@pytest.mark.with_downloads +@pytest.mark.unit +def test_builder_get_spans(bert_example_builder): + span_info_parts = ["CUSTOM 37 41", "CUSTOM 47 52", "CUSTOM 42 46", "CUSTOM 0 7"] + gold_sorted_spans = [(1, 1, 8), (1, 38, 42), (1, 43, 47), (1, 48, 53)] + spans = bert_example_builder._get_spans(span_info_parts) + spans.sort() + assert spans == gold_sorted_spans + + 
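+# In the expected spans of test_builder_get_spans above, each tuple is (semiotic_class_id, start + 1, end + 1):
+# CUSTOM has id 1 in the fixture's semiotic_classes map, and the +1 offset presumably corresponds to the [CLS]
+# token that BertExampleBuilder prepends to the character sequence.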
+@pytest.mark.skip("Doesn't work download when testing on github, for unknown reason") +@pytest.mark.with_downloads +@pytest.mark.unit +def test_builder_get_fragment_indices(bert_example_builder): + hyp = "a b o u t _ o u r _ s h i p e r s _ b u t _ y o u _ k n o w" + targets = [1] + # a b o u t _ o u r _ s h i p e r s _ b u t _ y o u _ k n o w + # 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 + span_info_parts = ["CUSTOM 8 17"] + gold_sorted_fragment_indices = [(7, 18, 1), (11, 18, 1)] + fragment_indices = bert_example_builder._get_fragment_indices(hyp, targets, span_info_parts) + fragment_indices.sort() + assert fragment_indices == gold_sorted_fragment_indices + + # a b o u t _ o u r _ s h i p e r s _ b u t _ y o u _ k n o w + # 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 + span_info_parts = ["CUSTOM 10 16"] + gold_sorted_fragment_indices = [(11, 18, 1)] + fragment_indices = bert_example_builder._get_fragment_indices(hyp, targets, span_info_parts) + fragment_indices.sort() + assert fragment_indices == gold_sorted_fragment_indices + + +@pytest.mark.skip("Doesn't work download when testing on github, for unknown reason") +@pytest.mark.with_downloads +@pytest.mark.unit +def test_builder_get_input_features(bert_example_builder): + hyp = "a s t r o n o m e r s _ d i d i e _ s o m o n _ a n d _ t r i s t i a n _ g l l o" + ref = "d i d i e r _ s a u m o n;a s t r o n o m i e;t r i s t a n _ g u i l l o t;t r i s t e s s e;m o n a d e;c h r i s t i a n;a s t r o n o m e r;s o l o m o n;d i d i d i d i d i;m e r c y" + targets = [1, 3] + span_info_parts = ["CUSTOM 12 23", "CUSTOM 28 41"] + + gold_tags = [ + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 0, + 0, + 0, + 0, + 0, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + ] + gold_input_ids = [ + 101, + 1037, + 1055, + 1056, + 1054, + 1051, + 1050, + 1051, + 1049, + 1041, + 1054, + 1055, + 1035, + 1040, + 1045, + 1040, + 1045, + 1041, + 1035, + 1055, + 1051, + 1049, + 1051, + 1050, + 1035, + 1037, + 1050, + 1040, + 1035, + 1056, + 1054, + 1045, + 1055, + 1056, + 1045, + 1037, + 1050, + 1035, + 1043, + 1048, + 1048, + 1051, + 102, + 1040, + 1045, + 1040, + 1045, + 1041, + 1054, + 1035, + 1055, + 1037, + 1057, + 1049, + 1051, + 1050, + 102, + 1037, + 1055, + 1056, + 1054, + 1051, + 1050, + 1051, + 1049, + 1045, + 1041, + 102, + 1056, + 1054, + 1045, + 1055, + 1056, + 1037, + 1050, + 1035, + 1043, + 1057, + 1045, + 1048, + 1048, + 1051, + 1056, + 102, + 1056, + 1054, + 1045, + 1055, + 1056, + 1041, + 1055, + 1055, + 1041, + 102, + 1049, + 1051, + 1050, + 1037, + 1040, + 1041, + 102, + 1039, + 1044, + 1054, + 1045, + 1055, + 1056, + 1045, + 1037, + 1050, + 102, + 1037, + 1055, + 1056, + 1054, + 1051, + 1050, + 1051, + 1049, + 1041, + 1054, + 102, + 1055, + 1051, + 1048, + 1051, + 1049, + 1051, + 1050, + 102, + 1040, + 1045, + 1040, + 1045, + 1040, + 1045, + 1040, + 1045, + 1040, + 1045, + 102, + 1049, + 1041, + 1054, + 1039, + 1061, + 102, + ] + gold_input_mask = [ + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 
1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + ] + gold_segment_ids = [ + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 2, + 2, + 2, + 2, + 2, + 2, + 2, + 2, + 2, + 2, + 2, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 4, + 4, + 4, + 4, + 4, + 4, + 4, + 4, + 4, + 4, + 5, + 5, + 5, + 5, + 5, + 5, + 5, + 6, + 6, + 6, + 6, + 6, + 6, + 6, + 6, + 6, + 6, + 7, + 7, + 7, + 7, + 7, + 7, + 7, + 7, + 7, + 7, + 7, + 8, + 8, + 8, + 8, + 8, + 8, + 8, + 8, + 9, + 9, + 9, + 9, + 9, + 9, + 9, + 9, + 9, + 9, + 9, + 10, + 10, + 10, + 10, + 10, + 10, + ] + gold_labels_mask = [ + 0, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + ] + gold_input_ids_for_subwords = [ + 101, + 26357, + 2106, + 2666, + 2061, + 8202, + 1998, + 13012, + 16643, + 2319, + 1043, + 7174, + 102, + 2106, + 3771, + 7842, + 2819, + 2239, + 102, + 28625, + 3630, + 9856, + 102, + 9822, + 26458, + 7174, + 2102, + 102, + 13012, + 13473, + 11393, + 102, + 13813, + 3207, + 102, + 3017, + 102, + 15211, + 102, + 9168, + 102, + 2106, + 28173, + 4305, + 4305, + 102, + 8673, + 102, + ] + gold_input_mask_for_subwords = [ + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + ] + gold_segment_ids_for_subwords = [ + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 1, + 1, + 1, + 1, + 1, + 1, + 2, + 2, + 2, + 2, + 3, + 3, + 3, + 3, + 3, + 4, + 4, + 4, + 4, + 5, + 5, + 5, + 6, + 6, + 7, + 7, + 8, + 8, + 9, + 9, + 9, + 9, + 9, + 10, + 10, + ] + gold_character_pos_to_subword_pos = [ + 0, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 1, + 2, + 2, + 2, + 3, + 3, + 3, + 4, + 4, + 5, + 5, + 5, + 5, + 6, + 6, + 6, + 6, + 7, + 7, + 7, + 8, + 8, + 8, + 9, + 9, + 9, + 10, + 11, + 11, + 11, + 12, + 13, + 13, + 13, + 14, + 14, + 14, + 14, + 15, + 15, + 16, + 16, + 17, + 17, + 18, + 19, + 19, + 19, + 19, + 19, + 20, + 20, + 21, + 21, + 21, + 22, + 23, + 23, + 23, + 23, + 23, + 23, + 23, + 23, + 24, + 24, + 24, + 25, + 25, + 25, + 26, + 27, + 28, + 28, + 28, + 29, + 29, + 29, + 30, + 30, + 30, + 31, + 32, + 32, + 32, + 32, + 33, + 33, + 34, + 35, + 35, + 35, + 35, + 35, + 35, + 35, + 35, + 35, + 36, + 37, + 37, + 37, + 37, + 37, + 37, + 37, + 37, + 37, + 37, + 38, + 39, + 39, + 39, + 39, + 39, + 39, + 39, + 
40, + 41, + 41, + 41, + 42, + 42, + 42, + 43, + 43, + 44, + 44, + 45, + 46, + 46, + 46, + 46, + 46, + 47, + ] + + tags = [0 for _ in hyp.split()] + for p, t in zip(span_info_parts, targets): + c, start, end = p.split(" ") + start = int(start) + end = int(end) + tags[start:end] = [t for i in range(end - start)] + + # get input features for characters + (input_ids, input_mask, segment_ids, labels_mask, labels, _, _,) = bert_example_builder._get_input_features( + hyp=hyp, ref=ref, tags=tags + ) + + # get input features for words + hyp_with_words = hyp.replace(" ", "").replace("_", " ") + ref_with_words = ref.replace(" ", "").replace("_", " ") + ( + input_ids_for_subwords, + input_mask_for_subwords, + segment_ids_for_subwords, + _, + _, + _, + _, + ) = bert_example_builder._get_input_features(hyp=hyp_with_words, ref=ref_with_words, tags=None) + + character_pos_to_subword_pos = bert_example_builder._map_characters_to_subwords(input_ids, input_ids_for_subwords) + + assert tags == gold_tags + assert input_ids == gold_input_ids + assert input_mask == gold_input_mask + assert segment_ids == gold_segment_ids + assert labels_mask == gold_labels_mask + assert input_ids_for_subwords == gold_input_ids_for_subwords + assert input_mask_for_subwords == gold_input_mask_for_subwords + assert segment_ids_for_subwords == gold_segment_ids_for_subwords + assert character_pos_to_subword_pos == gold_character_pos_to_subword_pos diff --git a/tools/ctc_segmentation/scripts/prepare_data.py b/tools/ctc_segmentation/scripts/prepare_data.py index 429b642d5ba0..c6ea024273fb 100644 --- a/tools/ctc_segmentation/scripts/prepare_data.py +++ b/tools/ctc_segmentation/scripts/prepare_data.py @@ -151,7 +151,7 @@ def split_text( ) # end of quoted speech - to be able to split sentences by full stop - transcript = re.sub(r"([\.\?\!])([\"\'])", r"\g<2>\g<1> ", transcript) + transcript = re.sub(r"([\.\?\!])([\"\'”])", r"\g<2>\g<1> ", transcript) # remove extra space transcript = re.sub(r" +", " ", transcript) diff --git a/tutorials/nlp/SpellMapper_English_ASR_Customization.ipynb b/tutorials/nlp/SpellMapper_English_ASR_Customization.ipynb new file mode 100644 index 000000000000..189ac958d377 --- /dev/null +++ b/tutorials/nlp/SpellMapper_English_ASR_Customization.ipynb @@ -0,0 +1,1403 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "PiRuohn_FQco" + }, + "source": [ + "# Overview\n", + "This tutorial demonstrates how to run inference with SpellMapper - a model for Spellchecking ASR (Automatic Speech Recognition) Customization.\n", + "\n", + "Estimated time: 10-15 min.\n", + "\n", + "SpellMapper is a non-autoregressive (NAR) model based on transformer architecture ([BERT](https://arxiv.org/pdf/1810.04805.pdf) with multiple separators).\n", + "It gets as input a single ASR hypothesis (text) and a **custom vocabulary** and predicts which fragments in the ASR hypothesis should be replaced by which custom words/phrases if any.\n", + "\n", + "This model is an alternative to word boosting/shallow fusion approaches:\n", + " - does not require retraining ASR model;\n", + " - does not require beam-search/language model(LM);\n", + " - can be applied on top of any English ASR model output;" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "qm5wmxVEGXgH" + }, + "source": [ + "## What is custom vocabulary?\n", + "**Custom vocabulary** is a list of words/phrases that are important for a particular user. For example, user's contact names, playlist, selected terminology and so on. 
The size of the custom vocabulary can vary from several hundreds to **several thousand entries** - but this is not an equivalent to ngram language model.\n", + "\n", + "![Scope of customization with user vocabulary](images/spellmapper_customization_vocabulary.png)\n", + "\n", + "Note that unlike traditional spellchecking approaches, which aim to correct known words using language models, the goal of contextual spelling correction is to correct highly specific user terms, most of which can be 1) out-of-vocabulary (OOV) words, 2) spelling variations (e.g., \"John Koehn\", \"Jon Cohen\") and language models cannot help much with that." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "D5_XwuXDOKho" + }, + "source": [ + "## Tutorial Plan\n", + "\n", + "1. Create a sample custom vocabulary using some medical terminology.\n", + "2. Study what customization does - a detailed analysis of a small example.\n", + "3. Run a bigger example:\n", + " * Create sample ASR results by running TTS (text-to-speech synthesis) + ASR on some medical paper abstracts.\n", + " * Run SpellMapper inference and show how it can improve ASR results using custom vocabulary.\n", + "\n", + "TL;DR We reduce WER from `14.3%` to `11.4%` by correcting medical terms, e.g.\n", + "* `puramesin` => `puromycin`\n", + "* `parromsin` => `puromycin`\n", + "* `and hydrod` => `anhydride`\n", + "* `lesh night and` => `lesch-nyhan`\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "agz8B2CxXBBG" + }, + "source": [ + "# Preparation" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "koRPpYISNPuH" + }, + "source": [ + "## Installing NeMo" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "HCnnz3cgVc4Q" + }, + "outputs": [], + "source": [ + "# Install NeMo library. If you are running locally (rather than on Google Colab), comment out the below lines\n", + "# and instead follow the instructions at https://github.com/NVIDIA/NeMo#Installation\n", + "GITHUB_ACCOUNT = \"bene-ges\"\n", + "BRANCH = \"spellchecking_asr_customization_double_bert\"\n", + "!python -m pip install git+https://github.com/{GITHUB_ACCOUNT}/NeMo.git@{BRANCH}#egg=nemo_toolkit[all]\n", + "\n", + "# Download local version of NeMo scripts. If you are running locally and want to use your own local NeMo code,\n", + "# comment out the below lines and set NEMO_DIR to your local path.\n", + "NEMO_DIR = 'nemo'\n", + "!git clone -b {BRANCH} https://github.com/{GITHUB_ACCOUNT}/NeMo.git $NEMO_DIR" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "_M92gCn_NW1_" + }, + "source": [ + "## Additional installs\n", + "We will use `sentence_splitter` to split abstracts to sentences." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "ddyJA3NtGl9C" + }, + "outputs": [], + "source": [ + "!pip install sentence_splitter" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "qVa91rGkeFje" + }, + "source": [ + "Clone the SpellMapper model from HuggingFace.\n", + "Note that we will need not only the checkpoint itself, but also the ngram mapping vocabulary `replacement_vocab_filt.txt` from the same folder." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "JiI9dkEm5cpW" + }, + "outputs": [], + "source": [ + "!git clone https://huggingface.co/bene-ges/spellmapper_asr_customization_en" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "8saqFOePVfFf" + }, + "source": [ + "## Imports\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "tAJyiYn_VnrF" + }, + "outputs": [], + "source": [ + "import IPython.display as ipd\n", + "import json\n", + "import random\n", + "import re\n", + "import soundfile as sf\n", + "import torch\n", + "\n", + "from collections import Counter, defaultdict\n", + "from difflib import SequenceMatcher\n", + "from matplotlib.pyplot import imshow\n", + "from matplotlib import pyplot as plt\n", + "from sentence_splitter import SentenceSplitter\n", + "from typing import List, Set, Tuple\n", + "\n", + "from nemo.collections.tts.models import FastPitchModel\n", + "from nemo.collections.tts.models import HifiGanModel\n", + "\n", + "from nemo.collections.asr.parts.utils.manifest_utils import read_manifest\n", + "\n", + "from nemo.collections.nlp.data.spellchecking_asr_customization.utils import (\n", + " get_all_candidates_coverage,\n", + " get_index,\n", + " load_ngram_mappings,\n", + " search_in_index,\n", + " get_candidates,\n", + " read_spellmapper_predictions,\n", + " apply_replacements_to_text,\n", + " load_ngram_mappings_for_dp,\n", + " get_alignment_by_dp,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "mfAaOdAWUGUV" + }, + "source": [ + "Use seed to get a reproducible behaviour." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "UlGnNKTuT_6A" + }, + "outputs": [], + "source": [ + "random.seed(0)\n", + "torch.manual_seed(0)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "RPPHI7Zd_fDz" + }, + "source": [ + "## Download data\n", + "\n", + "File `pubmed23n0009.xml` taken from public ftp server of https://www.ncbi.nlm.nih.gov/pmc/ contains information about 5593 medical papers, from which we extract only their abstracts. We will feed sentences from there to TTS + ASR to get initial ASR results.\n", + "\n", + "File `wordlist.txt` contains 100k **single-word** medical terms.\n", + "\n", + "File `valid_adam.txt` contains 24k medical abbreviations with their full forms. We will use those full forms as examples of **multi-word** medical terms.\n", + "\n", + "File `count_1w.txt` contains 330k single words with their frequencies from Google Ngrams corpus. 
We will use this file to filter out frequent words from our custom vocabulary.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "mX6cvE8xw2n1" + }, + "outputs": [], + "source": [ + "!wget https://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed23n0009.xml.gz\n", + "!gunzip pubmed23n0009.xml.gz\n", + "!grep \"AbstractText\" pubmed23n0009.xml > abstract.txt\n", + "\n", + "!wget https://raw.githubusercontent.com/McGill-NLP/medal/master/toy_data/valid_adam.txt\n", + "!wget https://raw.githubusercontent.com/glutanimate/wordlist-medicalterms-en/master/wordlist.txt\n", + "!wget https://norvig.com/ngrams/count_1w.txt" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "mBm9BeqNaRlC" + }, + "source": [ + "## Auxiliary functions\n", + "\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "kVUKhSh48Ypi" + }, + "outputs": [], + "source": [ + "CHARS_TO_IGNORE_REGEX = re.compile(r\"[\\.\\,\\?\\:!;()«»…\\]\\[/\\*–‽+&_\\\\½√>€™$•¼}{~—=“\\\"”″‟„]\")\n", + "\n", + "\n", + "def get_medical_vocabulary() -> Tuple[Set[str], Set[str]]:\n", + " \"\"\"This function builds a vocabulary of medical terms using downloaded sources:\n", + " wordlist.txt - 100k single-word medical terms.\n", + " valid_adam.txt - 24k medical abbreviations with their full forms. We use those full forms as examples of multi-word medical terms.\n", + " count_1w.txt - 330k single words with their frequencies from Google Ngrams corpus. We will use this file to filter out frequent words from our custom vocabulary.\n", + " \"\"\"\n", + " common_words = set()\n", + " with open(\"count_1w.txt\", \"r\", encoding=\"utf-8\") as f:\n", + " for line in f:\n", + " word, freq = line.strip().casefold().split(\"\\t\")\n", + " if int(freq) < 500000:\n", + " break\n", + " common_words.add(word)\n", + " print(\"Size of common words vocabulary:\", len(common_words))\n", + "\n", + " abbreviations = defaultdict(set)\n", + " medical_vocabulary = set()\n", + " with open(\"valid_adam.txt\", \"r\", encoding=\"utf-8\") as f:\n", + " lines = f.readlines()\n", + " # first line is header\n", + " for line in lines[1:]:\n", + " abbrev, _, phrase = line.strip().split(\"\\t\")\n", + " # skip phrases longer than 3 words because some of them are long explanations\n", + " if phrase.count(\" \") > 2:\n", + " continue\n", + " if phrase in common_words:\n", + " continue\n", + " medical_vocabulary.add(phrase)\n", + " abbrev = abbrev.lower()\n", + " abbreviations[abbrev].add(phrase)\n", + "\n", + " with open(\"wordlist.txt\", \"r\", encoding=\"utf-8\") as f:\n", + " for line in f:\n", + " word = line.strip().casefold()\n", + " # skip words contaning digits\n", + " if re.match(r\".*\\d.*\", word):\n", + " continue\n", + " if re.match(r\".*[\\[\\]\\(\\)\\+\\,\\.].*\", word):\n", + " continue\n", + " if word in common_words:\n", + " continue\n", + " medical_vocabulary.add(word)\n", + "\n", + " print(\"Size of medical vocabulary:\", len(medical_vocabulary))\n", + " print(\"Size of abbreviation vocabulary:\", len(abbreviations))\n", + " return medical_vocabulary, abbreviations\n", + "\n", + "\n", + "def read_abstracts(medical_vocabulary: Set[str]) -> Tuple[List[str], Set[str], Set[str]]:\n", + " \"\"\"This function reads the downloaded medical abstracts, and extracts sentences containing any word/phrase from the medical vocabulary.\n", + " Args:\n", + " medical_vocabulary: set of known medical words or phrases\n", + " Returns:\n", + " sentences: list of extracted sentences\n", + " 
all_found_singleword: set of single words from medical vocabulary that occurred at least in one sentence\n", + " all_found_multiword: set of multi-word phrases from medical vocabulary that occurred at least in one sentence\n", + " \"\"\"\n", + " splitter = SentenceSplitter(language='en')\n", + "\n", + " all_sentences = []\n", + " all_found_singleword = set()\n", + " all_found_multiword = set()\n", + " with open(\"abstract.txt\", \"r\", encoding=\"utf-8\") as f:\n", + " for line in f:\n", + " text = line.strip().replace(\"\", \"\").replace(\"\", \"\")\n", + " sents = splitter.split(text)\n", + " found_singleword = set()\n", + " found_multiword = set()\n", + " for sent in sents:\n", + " # remove anything in brackets from text\n", + " sent = re.sub(r\"\\(.+\\)\", r\"\", sent)\n", + " # remove quotes from text\n", + " sent = sent.replace(\"\\\"\", \"\")\n", + " # skip sentences contaning digits because normalization is out of scope of this tutorial\n", + " if re.match(r\".*\\d.*\", sent):\n", + " continue\n", + " # skip sentences contaning abbreviations with period inside the sentence (for the same reason)\n", + " if \". \" in sent:\n", + " continue\n", + " # skip long sentences as they may cause OOM issues\n", + " if len(sent) > 150:\n", + " continue\n", + " # replace all punctuation to space and convert to lowercase\n", + " sent_clean = CHARS_TO_IGNORE_REGEX.sub(\" \", sent).lower()\n", + " sent_clean = \" \".join(sent_clean.split(\" \"))\n", + " words = sent_clean.split(\" \")\n", + "\n", + " found_phrases = set()\n", + " for begin in range(len(words)):\n", + " for end in range(begin + 1, min(begin + 4, len(words))):\n", + " phrase = \" \".join(words[begin:end])\n", + " if phrase in medical_vocabulary:\n", + " found_phrases.add(phrase)\n", + " if end - begin == 1:\n", + " found_singleword.add(phrase)\n", + " else:\n", + " found_multiword.add(phrase)\n", + " if len(found_phrases) > 0:\n", + " all_sentences.append((sent, \";\".join(found_phrases)))\n", + " all_found_singleword = all_found_singleword.union(found_singleword)\n", + " all_found_multiword = all_found_multiword.union(found_multiword)\n", + "\n", + " print(\"Sentences:\", len(all_sentences))\n", + " print(\"Unique single-word terms found:\", len(all_found_singleword))\n", + " print(\"Unique multi-word terms found:\", len(all_found_multiword))\n", + " print(\"Examples of multi-word terms\", str(list(all_found_multiword)[0:10]))\n", + " \n", + " return all_sentences, all_found_singleword, all_found_multiword" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "XU3xeCBVpWOL" + }, + "outputs": [], + "source": [ + "def get_fragments(i_words: List[str], j_words: List[str]) -> List[Tuple[str, str, str, int, int, int, int]]:\n", + " \"\"\"This function is used to compare two word sequences to find minimal fragments that differ.\n", + " Args:\n", + " i_words: list of words in first sequence\n", + " j_words: list of words in second sequence\n", + " Returns:\n", + " list of tuples (difference_type, fragment1, fragment2, begin_of_fragment1, end_of_fragment1, begin_of_fragment2, end_of_fragment2)\n", + " \"\"\"\n", + " s = SequenceMatcher(None, i_words, j_words)\n", + " result = []\n", + " for tag, i1, i2, j1, j2 in s.get_opcodes():\n", + " result.append((tag, \" \".join(i_words[i1:i2]), \" \".join(j_words[j1:j2]), i1, i2, j1, j2))\n", + " result = sorted(result, key=lambda x: x[3])\n", + " return result" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "2ydXp_pFYmYu" + }, + "source": [ + "## 
Read medical data" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "WAeauax0SV1-" + }, + "outputs": [], + "source": [ + "medical_vocabulary, _ = get_medical_vocabulary()\n", + "sentences, found_singleword, found_multiword = read_abstracts(medical_vocabulary)\n", + "# in case if we need random candidates from a big sample - we will use full medical vocabulary for that purpose.\n", + "big_sample = list(medical_vocabulary)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "FRli7-Kx7sOO" + }, + "outputs": [], + "source": [ + "for sent, phrases in sentences[0:10]:\n", + " print(sent, \"\\t\", phrases)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "rL1VqH2_dk93" + }, + "source": [ + "# SpellMapper ASR Customization\n", + "\n", + "SpellMapper model relies on two offline preparation steps:\n", + "1. Collecting n-gram mappings from a large corpus (this mappings vocabulary had been collected once on a large corpus and is supplied with the model).\n", + "2. Indexing of user vocabulary by n-grams.\n", + "\n", + "![Offline data preparation](images/spellmapper_data_preparation.png)\n", + "\n", + "At inference time we take as input an ASR hypothesis and an n-gram-indexed user vocabulary and perform following steps:\n", + "1. Retrieve the top 10 candidate phrases from the user vocabulary that are likely to be contained in the given ASR-hypothesis, possibly in a misspelled form.\n", + "2. Run the neural model that tags the input characters with correct candidate labels or 0 if no match is found.\n", + "3. Do post-processing to combine results.\n", + "\n", + "![Inference pipeline](images/spellmapper_inference_pipeline.png)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "OeJpsMwslmrd" + }, + "source": [ + "## N-gram mappings\n", + "Note that n-gram mappings vocabulary had been collected from a large corpus and is supplied with the model. It is supposed to be \"universal\" for English language.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "uH6p0mOd12pi" + }, + "source": [ + "Let's see what n-gram mappings are like, for example, for an n-gram `l u c`.\n", + "Note that n-grams in `replacement_vocab_filt.txt` preserve one-to-one correspondence between original letters and misspelled fragments (this additional markup is handled during loading). \n", + "* `+` means that adjacent letters are concatenated and correspond to a single source letter. \n", + "* `` means that the original letter is deleted. \n", + "This auxiliary markup will be removed automatically during loading.\n", + "\n", + "`_` is used instead of real space symbol.\n", + "\n", + "Last three columns are:\n", + "* joint frequency\n", + "* frequency of original n-gram\n", + "* frequency of misspelled n-gram\n", + "\n", + "$$\\frac{JointFrequency}{SourceFrequency}=TranslationProbability$$\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "qul163dB1sKp" + }, + "outputs": [], + "source": [ + "!awk 'BEGIN {FS=\"\\t\"} ($1==\"l u c\"){print $0}' < spellmapper_asr_customization_en/replacement_vocab_filt.txt | sort -t$'\\t' -k3nr" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "eWxcrVWZ3Pfq" + }, + "source": [ + "Now we read n-gram mappings from the file. Parameter `max_misspelled_freq` controls maximum frequency of misspelled n-grams. N-grams more frequent than that are put in the list of banned n-grams and won't be used in indexing." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "WHKhE945-N7o" + }, + "outputs": [], + "source": [ + "print(\"load n-gram mappings...\")\n", + "ngram_mapping_vocab, ban_ngram = load_ngram_mappings(\"spellmapper_asr_customization_en/replacement_vocab_filt.txt\", max_misspelled_freq=125000)\n", + "# CAUTION: entries in ban_ngram end with a space and can contain \"+\" \"=\"\n", + "print(\"Size of ngram mapping vocabulary:\", len(ngram_mapping_vocab))\n", + "print(\"Size of banned ngrams:\", len(ban_ngram))\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "49IcMBfllvXN" + }, + "source": [ + "## Indexing of custom vocabulary" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "b1K6paeee2Iu" + }, + "source": [ + "As we mentioned earlier, this model pipeline is intended to work with custom vocabularies up to several thousand entries. Since the whole medical vocabulary contains 110k entries, we restrict our custom vocabulary to 5000+ terms that occured in given corpus of abstracts.\n", + "\n", + "The goal of indexing our custom vocabulary is to build an index where key is a letter n-gram and value is the whole phrase. The keys are n-grams in the given user phrase and their misspelled variants taken from our collection of n-\n", + "gram mappings (see Index of custom vocabulary in Fig. 1)\n", + "\n", + "*Though it is possible to index and search the whole 110k vocabulary, it will require additional optimizations and is beyond the scope of this tutorial.*" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "xWb0jGqw6Woi" + }, + "outputs": [], + "source": [ + "custom_phrases = []\n", + "for phrase in medical_vocabulary:\n", + " if phrase not in found_singleword and phrase not in found_multiword:\n", + " continue\n", + " custom_phrases.append(\" \".join(list(phrase.replace(\" \", \"_\"))))\n", + "print(\"Size of customization vocabulary:\", len(custom_phrases))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "UHWor5pD2Eyb" + }, + "source": [ + "Now we build the index for our custom phrases.\n", + "\n", + "Parameter `min_log_prob` controls minimum log probability, after which we stop growing this n-gram.\n", + "\n", + "Parameter `max_phrases_per_ngram` controls maximum number of phrases that can be indexed by one ngram. 
N-grams exceeding this limit are also banned and not used in indexing.\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "hs4RDXj0-xW9" + }, + "outputs": [], + "source": [ + "phrases, ngram2phrases = get_index(custom_phrases, ngram_mapping_vocab, ban_ngram, min_log_prob=-4.0, max_phrases_per_ngram=600)\n", + "print(\"Size of phrases:\", len(phrases))\n", + "print(\"Size of ngram2phrases:\", len(ngram2phrases))\n", + "\n", + "# Save index to file - later we will use it in other script\n", + "with open(\"index.txt\", \"w\", encoding=\"utf-8\") as out:\n", + " for ngram in ngram2phrases:\n", + " for phrase_id, begin, size, logprob in ngram2phrases[ngram]:\n", + " phrase = phrases[phrase_id]\n", + " out.write(ngram + \"\\t\" + phrase + \"\\t\" + str(begin) + \"\\t\" + str(size) + \"\\t\" + str(logprob) + \"\\n\")\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "RV1sdQ9rvar8" + }, + "source": [ + "## Small detailed example\n", + "\n", + "Let's consider, for example, one custom phrase `thoracic aorta` and an incorrect ASR-hypothesis `the tarasic oorda is a part of the aorta located in the thorax`, containing a misspelled phrase `tarasic_oorda`. \n", + "\n", + "We will see \n", + "1. How this custom phrase is indexed.\n", + "2. How candidate retrieval works, given ASR-hypothesis.\n", + "3. How inference and post-processing work.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "kGBTTJXixnrG" + }, + "source": [ + "### N-grams in index" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ryfUlqNMl4vQ" + }, + "source": [ + "Let's look, for example, by what n-grams a custom phrase `thoracic aorta` is indexed. \n", + "Columns: \n", + "1. n-gram\n", + "2. beginning position in the phrase\n", + "3. length\n", + "4. log probability\n", + "\n", + "Note that many n-grams are not from n-gram mappings file. Those are derived by growing previous n-grams with new replacements. In this case log probabilities are summed up. Growing stops, when minimum log prob is exceeded.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "x0ZVsXGBo8pt" + }, + "outputs": [], + "source": [ + "for ngram in ngram2phrases:\n", + " for phrase_id, b, length, lprob in ngram2phrases[ngram]:\n", + " if phrases[phrase_id] == \"t h o r a c i c _ a o r t a\":\n", + " print(ngram.ljust(16) + \"\\t\" + str(b).rjust(4) + \"\\t\" + str(length).rjust(4) + \"\\t\" + str(lprob))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "20ov23ze4xeQ" + }, + "source": [ + "### Candidate retrieval\n", + "Candidate retrieval tasks are:\n", + " - Given an input sentence and an index of custom vocabulary find all n-grams from the index matching the sentence. \n", + " - Find which sentence fragments and which custom phrases have most \"hits\" - potential candidates.\n", + " - Find approximate starting position for each candidate phrase. \n", + "\n", + "\n", + "Let's look at the hits, that phrase \"thoracic aorta\" gets by searching all ngrams in the input text. We can see some hits in different part of the sentence, but a moving window can find a fragment with most hits." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "t_rhKQ3Xqa8A" + }, + "outputs": [], + "source": [ + "sent = \"the_tarasic_oorda_is_a_part_of_the_aorta_located_in_the_thorax\"\n", + "phrases2positions, position2ngrams = search_in_index(ngram2phrases, phrases, sent)\n", + "print(\" \".join(list(sent)))\n", + "print(\" \".join(list(map(str, phrases2positions[phrases.index(\"t h o r a c i c _ a o r t a\")].astype(int)))))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "orkRapbjF4aZ" + }, + "source": [ + "`phrases2positions` is a matrix of size (len(phrases), len(ASR_hypothesis)).\n", + "It is filled with 1.0 (hits) on intersection of letter n-grams and phrases that are indexed by these n-grams, 0.0 - elsewhere.\n", + "It is used to find phrases with many hits within a contiguous window - potential matching candidates.\n", + "\n", + "`position2ngrams` is a list of sets of ngrams. List index is the starting position in the ASR-hypothesis.\n", + "It is used later to check how well each found candidate is covered by n-grams (to avoid cases where some repeating n-gram gives many hits to a phrase, but the phrase itself is not well covered)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "JF7u4_iiHLyI" + }, + "outputs": [], + "source": [ + "candidate2coverage, candidate2position = get_all_candidates_coverage(phrases, phrases2positions)\n", + "print(\"Coverage=\", candidate2coverage[phrases.index(\"t h o r a c i c _ a o r t a\")])\n", + "print(\"Starting position=\", candidate2position[phrases.index(\"t h o r a c i c _ a o r t a\")])" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "45mvKg8ZyNbr" + }, + "source": [ + "`candidate2coverage` is a list of size len(phrases) containing coverage (0.0 to 1.0) in best window.\n", + "Coverage is a smoothed percentage of hits in the window of size of the given phrase.\n", + "\n", + "`candidate2position` is a list of size len(phrases) containing starting position of best window.\n", + "\n", + "Starting position is approximate, it's ok. If it is not at the beginning of some word, SpellMapper will try to adjust it later. In this particular example we get 5 as starting position instead of 4, missing the first letter." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Sjyn9I98udL9" + }, + "source": [ + "### Inference\n", + "\n", + "Now let's generate input for SpellMapper inference. \n", + "An input line should consist of 4 tab-separated columns:\n", + " - text of ASR-hypothesis\n", + " - texts of 10 candidates separated by semicolon\n", + " - 1-based ids of non-dummy candidates\n", + " - approximate start/end coordinates of non-dummy candidates (correspond to ids)\n", + "Note that candidate retrieval is done inside the function `get_candidates`." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "cJnusVfBRhRX" + }, + "outputs": [], + "source": [ + "out = open(\"spellmapper_input.txt\", \"w\", encoding=\"utf-8\")\n", + "letters = list(sent)\n", + "candidates = get_candidates(ngram2phrases, phrases, letters, big_sample)\n", + "# We add two columns with targets and span_info. 
\n", + "# They have same format as during training, but start and end positions are APPROXIMATE, they will be adjusted when constructing BertExample.\n", + "targets = []\n", + "span_info = []\n", + "for idx, c in enumerate(candidates):\n", + " if c[1] == -1:\n", + " continue\n", + " targets.append(str(idx + 1)) # targets are 1-based\n", + " start = c[1]\n", + " end = min(c[1] + c[2], len(letters)) # ensure that end is not outside sentence length (it can happen because c[2] is candidate length used as approximation)\n", + " span_info.append(\"CUSTOM \" + str(start) + \" \" + str(end))\n", + "\n", + "out.write(\" \".join(letters) + \"\\t\" + \";\".join([x[0] for x in candidates]) + \"\\t\" + \" \".join(targets) + \"\\t\" + \";\".join(span_info) + \"\\n\")\n", + "out.close()\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "Qpei5o89SmaU" + }, + "outputs": [], + "source": [ + "!cat spellmapper_input.txt" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "9rAmO15SS6go" + }, + "outputs": [], + "source": [ + "!python nemo/examples/nlp/spellchecking_asr_customization/spellchecking_asr_customization_infer.py \\\n", + " pretrained_model=spellmapper_asr_customization_en/training_10m_5ep.nemo \\\n", + " model.max_sequence_len=512 \\\n", + " inference.from_file=spellmapper_input.txt \\\n", + " inference.out_file=spellmapper_output.txt \\\n", + " inference.batch_size=16 \\\n", + " lang=en\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "wd2aq4T1N5cs" + }, + "source": [ + "Each line in SpellMapper output is tab-separated and consists of 4 columns:\n", + "1. ASR-hypothesis (same as in input)\n", + "2. 10 candidates separated with semicolon (same as in input)\n", + "3. fragment predictions, separated with semicolon, each prediction is a tuple (start, end, candidate_id, probability)\n", + "4. letter predictions - candidate_id predicted for each letter (this is only for debug purposes)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "ravgEX8cTFty" + }, + "outputs": [], + "source": [ + "!cat spellmapper_output.txt" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "az26364-PHb2" + }, + "source": [ + "We can use some utility functions to apply found replacements and get actual corrected text." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "lPtFa_EhK8pb" + }, + "outputs": [], + "source": [ + "spellmapper_results = read_spellmapper_predictions(\"spellmapper_output.txt\")\n", + "text, replacements, _ = spellmapper_results[0]\n", + "corrected_text = apply_replacements_to_text(text, replacements, replace_hyphen_to_space=False)\n", + "print(\"Text before correction:\\n\", text)\n", + "print(\"Text after correction:\\n\", corrected_text)\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "efF7O-D91FLX" + }, + "source": [ + "# Bigger customization example\n", + "\n", + "Let's test customization on more data. 
The plan is\n", + " * Get baseline ASR transcriptions by running TTS + ASR on some medical paper abstracts.\n", + " * Run SpellMapper inference and show how it can improve ASR results using custom vocabulary.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "r_EFPnyDcXZt" + }, + "source": [ + "## Run TTS" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "i9F5SBhmr8rk" + }, + "outputs": [], + "source": [ + "# create a folder for wav files (TTS output)\n", + "!rm -r audio\n", + "!mkdir audio" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "JMbkNVt7YBAO" + }, + "outputs": [], + "source": [ + "if torch.cuda.is_available():\n", + " device = \"cuda\"\n", + "else:\n", + " device = \"cpu\"\n", + "\n", + "# Load FastPitch from HuggingFace\n", + "spectrogram_generator = FastPitchModel.from_pretrained(\"nvidia/tts_en_fastpitch\").eval().to(device)\n", + "# Load HifiGan vocoder from HuggingFace\n", + "vocoder = HifiGanModel.from_pretrained(model_name=\"nvidia/tts_hifigan\").eval().to(device)\n", + "\n", + "# Write sentences that we want to feed to TTS\n", + "with open(\"tts_input.txt\", \"w\", encoding=\"utf-8\") as out:\n", + " for sent, _ in sentences[0:100]:\n", + " out.write(sent + \"\\n\")\n", + "\n", + "out_manifest = open(\"manifest.json\", \"w\", encoding=\"utf-8\")\n", + "i = 0\n", + "with open(\"tts_input.txt\", \"r\", encoding=\"utf-8\") as inp:\n", + " for line in inp:\n", + " text = line.strip()\n", + " text_clean = CHARS_TO_IGNORE_REGEX.sub(\" \", text).lower() #replace all punctuation to space and convert to lowercase\n", + " text_clean = \" \".join(text_clean.split())\n", + "\n", + " parsed = spectrogram_generator.parse(text, normalize=True)\n", + "\n", + " spectrogram = spectrogram_generator.generate_spectrogram(tokens=parsed)\n", + " audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram)\n", + "\n", + " # Note that vocoder return a batch of audio. In this example, we just take the first and only sample.\n", + " filename = \"audio/\" + str(i) + \".wav\"\n", + " sf.write(filename, audio.to('cpu').detach().numpy()[0], 16000)\n", + " out_manifest.write(\n", + " \"{\\\"audio_filepath\\\": \\\"\" + filename + \"\\\", \\\"text\\\": \\\"\" + text_clean + \"\\\", \\\"orig_text\\\": \\\"\" + text + \"\\\"}\\n\"\n", + " )\n", + " i += 1\n", + "\n", + " # display some examples\n", + " if i < 10:\n", + " print(f'\"{text}\"\\n')\n", + " ipd.display(ipd.Audio(audio.to('cpu').detach(), rate=22050))\n", + "\n", + "out_manifest.close()\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "9T3CZcCAmxCz" + }, + "source": [ + "Now we have a folder with generated audios `audio/*.wav` and a nemo manifest with json records like `{\"audio_filepath\": \"audio/0.wav\", \"text\": \"no renal auditory or vestibular toxicity was observed\", \"orig_text\": \"No renal, auditory, or vestibular toxicity was observed.\"}`." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "pR_T1HnttVjm" + }, + "outputs": [], + "source": [ + "lines = []\n", + "with open(\"manifest.json\", \"r\", encoding=\"utf-8\") as f:\n", + " lines = f.readlines()\n", + "\n", + "for line in lines:\n", + " try:\n", + " data = json.loads(line.strip())\n", + " except:\n", + " print(line)" + ] + }, + { + "cell_type": "markdown", + "source": [ + "Free GPU memory to avoid OOM." 
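+ "\n",
+ "If you want to verify that memory was actually released after running the next cell, you can check the allocator statistics, e.g.:\n",
+ "\n",
+ "```python\n",
+ "# torch.cuda.memory_allocated() reports the bytes currently held by tensors on the GPU\n",
+ "print(round(torch.cuda.memory_allocated() / 1e9, 2), 'GB still allocated')\n",
+ "```"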
+ ], + "metadata": { + "id": "bt2TMLLvdUHm" + } + }, + { + "cell_type": "code", + "source": [ + "del spectrogram_generator\n", + "del vocoder\n", + "torch.cuda.empty_cache()" + ], + "metadata": { + "id": "ZwEpAOCaRH7s" + }, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "HrensakWdLkt" + }, + "source": [ + "## Run baseline ASR" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "IQNIo2M_mqJc" + }, + "source": [ + "Next we transcribe our .wav files with a general domain [ASR model](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_en_conformer_ctc_large). It will generate an output file `ctc_baseline_transcript.json` where the predicted transcriptions are stored in the field `pred_text` of each record.\n", + "\n", + "Note that this ASR model was not trained or fine-tuned on medical domain, so we expect it to make mistakes on medical terms." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "NMN63ux1mJiG" + }, + "outputs": [], + "source": [ + "!python nemo/examples/asr/transcribe_speech.py \\\n", + " pretrained_name=\"stt_en_conformer_ctc_large\" \\\n", + " dataset_manifest=manifest.json \\\n", + " output_filename=ctc_baseline_transcript_tmp.json \\\n", + " batch_size=2" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "L3swQ8uqqgnp" + }, + "source": [ + "ATTENTION: SpellMapper relies on words to be separated by _single_ space\n", + "\n", + "There is a bug with multiple space, observed in ASR results produced by Conformer-CTC, probably connected to this issue: https://github.com/NVIDIA/NeMo/issues/4034.\n", + "\n", + "So we need to correct the manifests to ensure that all spaces are single." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "z17sxkmXrXpJ" + }, + "outputs": [], + "source": [ + "test_data = read_manifest(\"ctc_baseline_transcript_tmp.json\")\n", + "\n", + "for i in range(len(test_data)):\n", + " # if there are multiple spaces in the string they will be merged to one\n", + " test_data[i][\"pred_text\"] = \" \".join(test_data[i][\"pred_text\"].split())\n", + "\n", + "with open(\"ctc_baseline_transcript.json\", \"w\", encoding=\"utf-8\") as out:\n", + " for d in test_data:\n", + " line = json.dumps(d)\n", + " out.write(line + \"\\n\")\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "PuKtfhbVkVJY" + }, + "outputs": [], + "source": [ + "!head -n 4 ctc_baseline_transcript.json" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "aCJw9NEXqRg8" + }, + "source": [ + "### Calculating WER of baseline transcript\n", + "We use the standard script from NeMo to calculate WER and CER of our baseline transcript. Internally it compares the text in `pred_text` (predicted transcript) to `text` (reference transcript). " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "ZmNEGVWQsGo2" + }, + "outputs": [], + "source": [ + "!python nemo/examples/asr/speech_to_text_eval.py \\\n", + " dataset_manifest=ctc_baseline_transcript.json \\\n", + " only_score_manifest=True\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "AvPwJr0ZqdkN" + }, + "source": [ + "### See fragments that differ\n", + "We use SequenceMatcher to see fragments that differ. 
(Another option is to use a more powerful analytics tool [Speech Data Explorer](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/speech_data_explorer.html))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "RAeaVCpMv78y" + }, + "outputs": [], + "source": [ + "test_data = read_manifest(\"ctc_baseline_transcript.json\")\n", + "pred_text = [data['pred_text'] for data in test_data]\n", + "ref_text = [data['text'] for data in test_data]\n", + "audio_filepath = [data['audio_filepath'] for data in test_data]\n", + "\n", + "diff_vocab = Counter()\n", + "\n", + "for i in range(len(test_data)):\n", + " ref_sent = \" \" + ref_text[i] + \" \"\n", + " pred_sent = \" \" + pred_text[i] + \" \"\n", + "\n", + " pred_words = pred_sent.strip().split()\n", + " ref_words = ref_sent.strip().split()\n", + "\n", + " for tag, hyp_fragment, ref_fragment, i1, i2, j1, j2 in get_fragments(pred_words, ref_words):\n", + " if tag != \"equal\":\n", + " diff_vocab[(tag, hyp_fragment, ref_fragment)] += 1\n", + "\n", + "sum_ = 0\n", + "print(\"PRED vs REF\")\n", + "for k, v in diff_vocab.most_common(1000000):\n", + " sum_ += v\n", + " print(k, v, \"sum=\", sum_)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "dUSOF7iD1w_9" + }, + "source": [ + "## Run SpellMapper" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "x39BQhYB6_Fr" + }, + "source": [ + "Now we run retrieval on our input manifest and prepare input for SpellMapper inference. Note that we use index of custom vocabulary (file `index.txt` that we saved earlier)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "y8x-yT5WqfFz" + }, + "outputs": [], + "source": [ + "!python nemo/examples/nlp/spellchecking_asr_customization/prepare_input_from_manifest.py \\\n", + " --manifest ctc_baseline_transcript.json \\\n", + " --custom_vocab_index index.txt \\\n", + " --big_sample spellmapper_asr_customization_en/big_sample.txt \\\n", + " --short2full_name short2full.txt \\\n", + " --output_name spellmapper_input.txt" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ueq_JAPWGs_Y" + }, + "source": [ + "Run the inference." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "zgkqiiZtJjcB" + }, + "outputs": [], + "source": [ + "!python nemo/examples/nlp/spellchecking_asr_customization/spellchecking_asr_customization_infer.py \\\n", + " pretrained_model=spellmapper_asr_customization_en/training_10m_5ep.nemo \\\n", + " model.max_sequence_len=512 \\\n", + " inference.from_file=spellmapper_input.txt \\\n", + " inference.out_file=spellmapper_output.txt \\\n", + " inference.batch_size=16 \\\n", + " lang=en\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "RPQWJX8dFLfX" + }, + "source": [ + "Now we postprocess SpellMapper output and create output corrected manifest." 
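+ "\n",
+ "Under the hood the script does roughly the following (a simplified sketch reusing the helper functions from the small example above and assuming one SpellMapper output line per manifest record; the real script also handles dummy candidates, the short-to-full name mapping and other corner cases):\n",
+ "\n",
+ "```python\n",
+ "# Sketch only: apply the found replacements to pred_text of every record\n",
+ "results = read_spellmapper_predictions('spellmapper_output.txt')\n",
+ "records = read_manifest('ctc_baseline_transcript.json')\n",
+ "for record, (text, replacements, _) in zip(records, results):\n",
+ "    record['pred_text_before_correction'] = record['pred_text']\n",
+ "    record['pred_text'] = apply_replacements_to_text(text, replacements, replace_hyphen_to_space=True)\n",
+ "# ...then the updated records are written to ctc_corrected_transcript.json\n",
+ "```"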
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "3eFU515yKvXP"
+ },
+ "outputs": [],
+ "source": [
+ "!python nemo/examples/nlp/spellchecking_asr_customization/postprocess_and_update_manifest.py \\\n",
+ "    --input_manifest ctc_baseline_transcript.json \\\n",
+ "    --short2full_name short2full.txt \\\n",
+ "    --output_manifest ctc_corrected_transcript.json \\\n",
+ "    --spellmapper_result spellmapper_output.txt \\\n",
+ "    --replace_hyphen_to_space \\\n",
+ "    --field_name pred_text \\\n",
+ "    --ngram_mappings \"\"\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "hRoIhhGh17tp"
+ },
+ "source": [
+ "### Calculating WER of corrected transcript"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "qIT957bGo9AY"
+ },
+ "outputs": [],
+ "source": [
+ "!python nemo/examples/asr/speech_to_text_eval.py \\\n",
+ "    dataset_manifest=ctc_corrected_transcript.json \\\n",
+ "    only_score_manifest=True\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "NYXIPusupqOQ"
+ },
+ "outputs": [],
+ "source": [
+ "test_data = read_manifest(\"ctc_corrected_transcript.json\")\n",
+ "pred_text = [data['pred_text'] for data in test_data]\n",
+ "ref_text = [data['pred_text_before_correction'] for data in test_data]\n",
+ "\n",
+ "diff_vocab = Counter()\n",
+ "\n",
+ "for i in range(len(test_data)):\n",
+ "    ref_sent = \" \" + ref_text[i] + \" \"\n",
+ "    pred_sent = \" \" + pred_text[i] + \" \"\n",
+ "\n",
+ "    pred_words = pred_sent.strip().split()\n",
+ "    ref_words = ref_sent.strip().split()\n",
+ "\n",
+ "    for tag, hyp_fragment, ref_fragment, i1, i2, j1, j2 in get_fragments(pred_words, ref_words):\n",
+ "        if tag != \"equal\":\n",
+ "            diff_vocab[(tag, hyp_fragment, ref_fragment)] += 1\n",
+ "\n",
+ "sum_ = 0\n",
+ "print(\"Corrected vs baseline\")\n",
+ "for k, v in diff_vocab.most_common(1000000):\n",
+ "    sum_ += v\n",
+ "    print(k, v, \"sum=\", sum_)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "DJtXlqXbTD6M"
+ },
+ "source": [
+ "### Filtering by Dynamic Programming (DP) score\n",
+ "\n",
+ "What else can be done?\n",
+ "Given a fragment and its potential replacement, we can apply **dynamic programming** to find the most probable \"translation\" path between them. We will use the same n-gram mapping vocabulary, because its frequencies give us the \"translation probability\" of each n-gram pair. The final path score is calculated as the maximum sum of log probabilities of matching n-grams along this path.\n",
+ "Let's look at an example."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "05Qf9wgHU_UR"
+ },
+ "outputs": [],
+ "source": [
+ "joint_vocab, orig_vocab, misspelled_vocab, max_len = load_ngram_mappings_for_dp(\"spellmapper_asr_customization_en/replacement_vocab_filt.txt\")\n",
+ "\n",
+ "fragment = \"and hydrod\"\n",
+ "replacement = \"anhydride\"\n",
+ "fragment_spaced = \" \".join(list(fragment.replace(\" \", \"_\")))\n",
+ "replacement_spaced = \" \".join(list(replacement.replace(\" \", \"_\")))\n",
+ "path = get_alignment_by_dp(\n",
+ "    replacement_spaced,\n",
+ "    fragment_spaced,\n",
+ "    dp_data=(joint_vocab, orig_vocab, misspelled_vocab, max_len)\n",
+ ")\n",
+ "print(\"Dynamic Programming path:\")\n",
+ "for fragment_ngram, replacement_ngram, score, sum_score, joint_freq, orig_freq, misspelled_freq in path:\n",
+ "    print(\n",
+ "        \"\\t\",\n",
+ "        \"frag=\",\n",
+ "        fragment_ngram,\n",
+ "        \"; repl=\",\n",
+ "        replacement_ngram,\n",
+ "        \"; score=\",\n",
+ "        score,\n",
+ "        \"; sum_score=\",\n",
+ "        sum_score,\n",
+ "        \"; joint_freq=\",\n",
+ "        joint_freq,\n",
+ "        \"; orig_freq=\",\n",
+ "        orig_freq,\n",
+ "        \"; misspelled_freq=\",\n",
+ "        misspelled_freq,\n",
+ "    )\n",
+ "\n",
+ "print(\"Final path score is in path[-1][3]: \", path[-1][3])\n",
+ "print(\"Dynamic programming (DP) score per symbol is the final score divided by len(fragment): \", path[-1][3] / (len(fragment)))\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "hgfKPKckaLnc"
+ },
+ "source": [
+ "The idea is that we can skip replacements whose average DP score per symbol is below some predefined minimum, say -1.5. For example, if the best path for a 10-character fragment sums to -9.0, the per-symbol score is -0.9, which is above -1.5, so the replacement is kept.\n",
+ "Note that dynamic programming is slow because of its quadratic complexity, but it helps to get rid of some false positives. Let's apply it to the same test set."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "UhSXh7ht_JRn"
+ },
+ "outputs": [],
+ "source": [
+ "!python nemo/examples/nlp/spellchecking_asr_customization/postprocess_and_update_manifest.py \\\n",
+ "    --input_manifest ctc_baseline_transcript.json \\\n",
+ "    --short2full_name short2full.txt \\\n",
+ "    --output_manifest ctc_corrected_transcript_dp.json \\\n",
+ "    --spellmapper_result spellmapper_output.txt \\\n",
+ "    --replace_hyphen_to_space \\\n",
+ "    --field_name pred_text \\\n",
+ "    --use_dp \\\n",
+ "    --ngram_mappings spellmapper_asr_customization_en/replacement_vocab_filt.txt \\\n",
+ "    --min_dp_score_per_symbol -1.5"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "u8R5YHB3vPC8"
+ },
+ "outputs": [],
+ "source": [
+ "!python nemo/examples/asr/speech_to_text_eval.py \\\n",
+ "    dataset_manifest=ctc_corrected_transcript_dp.json \\\n",
+ "    only_score_manifest=True"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "upvTbkFAeYtR"
+ },
+ "source": [
+ "# Final notes\n",
+ "1. Our paper...\n",
+ "\n",
+ "2. To reproduce the evaluation experiments from this paper, see these scripts:\n",
+ "  - [test_on_kensho.sh](https://github.com/bene-ges/nemo_compatible/blob/main/scripts/nlp/en_spellmapper/evaluation/test_on_kensho.sh)\n",
+ "  - [test_on_userlibri.sh](https://github.com/bene-ges/nemo_compatible/blob/main/scripts/nlp/en_spellmapper/evaluation/test_on_userlibri.sh)\n",
+ "  - [test_on_spoken_wikipedia.sh](https://github.com/bene-ges/nemo_compatible/blob/main/scripts/nlp/en_spellmapper/evaluation/test_on_spoken_wikipedia.sh)\n",
+ "\n",
+ "3. 
To reproduce training, see [README.md](https://github.com/bene-ges/nemo_compatible/blob/main/scripts/nlp/en_spellmapper/README.md)\n",
+ "\n",
+ "4. Promising future research directions would be:\n",
+ "  - add a simple trainable classifier on top of SpellMapper predictions instead of using multiple thresholds\n",
+ "  - retrain after adding more varied false positives to the training data"
+ ]
+ }
+ ],
+ "metadata": {
+ "accelerator": "GPU",
+ "colab": {
+ "toc_visible": true,
+ "provenance": []
+ },
+ "gpuClass": "standard",
+ "kernelspec": {
+ "display_name": "Python 3",
+ "name": "python3"
+ },
+ "language_info": {
+ "name": "python"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
\ No newline at end of file
diff --git a/tutorials/nlp/images/spellmapper_customization_vocabulary.png b/tutorials/nlp/images/spellmapper_customization_vocabulary.png
new file mode 100644
index 0000000000000000000000000000000000000000..1ecd7ab5add501b7e2889142a4df67442318b161
GIT binary patch
literal 39243
[binary PNG data omitted]
diff --git a/tutorials/nlp/images/spellmapper_data_preparation.png b/tutorials/nlp/images/spellmapper_data_preparation.png
new file mode 100644
index 0000000000000000000000000000000000000000..24df8a8e0525ad6deaf7156fd2f5dc79f9ec9a39
GIT binary patch
literal 75265
[binary PNG data omitted]
zhxnBYKA%%$(iz2;A6JIM30}#ah4T4*T?~)h9>+M}2LbFs92m+=AW-jv*(}M-^!kIU zeD(bFpu6@ohN{XRhfF2|P!9Tg332fODi{^5R#zxp-&&<|HRi`JuCA^rXw)h-KbX@q zf0K6rT#EY{l)r6+vR=QX=F2USvjbBp(c#R5mO6$0||$h?Y-j8zA$ zmLWLoQ8yV83*e}!SOFea95&Te)6Igf57$TfMki~#eH<#XpADAE0|Nsw8I8(R8lq?@ zp{srjB~@(nJl$Wx#0n^7c{|#tb?R1BS9VB!P=okhFA}STyOd*>ru2^IK*#JiOy(2aDG5 zFOrgycs#CsXihh@!;WL8@ii&zZdlARpLX7BVlEEM>5{&|H19aN?G48d*^?nAAsKUL z8%jU9Kg8tl`&FV^)iW^>?Th3zlg#eC{?1@4yEmNo88bj`&iP_{NX=*mfb4bZl?FHL zQVCgE1X|Ja4?NFl>@L^GEiMB79~)eDh8yg5XtnBB&o2veb4gAnsjB0Z%QU=9@#*#6 zWf4*dHrd`)HZDC=-`||TUrEmEgM9{^0l4=!J1(%YefXvlQ214?Sh6oL~7jWw7-5#QVH`52njV>LU3q32yi74`&f;yi6N@Yz+)yvm)Hh zR_O0ye5S8=a=Y_>zynROL^uOV;1pA@v${B9TAe8$%ND+Keeh4c-kX4ULKx1{fPxYU z$6>!(@whEqKY4b009$7m4!5{c`>UJdl{SYbj^odDmdl&mZUq8GM z>zN;9;hTlr9*&n18IAlu;z?3Mhn=Z;N{^J}Sh~Af1D4+L5*0i8_<0n6z8CW3XWXROkQ;K{qyjVJKJR+vo zN^Fk5e{~uj4$CPgV66gw3!>t!i6jwlm21m9UrQBrhWDv>hFXdGehWL2 zz=P`u(K-`Eg}ZAinaJ>M)${cz4&KQMaOwcRT2S(l-HwG^s)7gx6 zuyOn+f-YPeJ}*J6`N0gjK^8Io8zs#t;{J&VkM5tps*Iv|oNbjxVBmUQ2%^9s1?r#K zhPLD5-VP@}TOH}Bc<_#p^t-R(qmcgYE1tIdkUSz87G{^_DODLt?S8u!M}7Brnv$Ao z!#Y>5>(kQ0t^c(1yS&k|<;O&x%uY)rqIGuR*X$20KQ%^$RAh_m@z1!YKmC5w^w8ue z_=CSB!)UQiyX_a372_JM7%}OJJSz_Ys|^)c;?jKbIOMU2)nYjn#5(Gd3Mez55!;Q~ zN2|&;n`*DqCmwQgbHMq734gQiEhi%IXXdZ)g^%yb)8niS1eL@esEyv0dK>-OU%Dw) zEur|~gJQB3d~ZYiG2^%LhLYP{pTw{I+-Krw)US`1%znEWQaC}q)=ow7O{iyU5(TwU zz4ur9?;jugIqTl;hfa~dVkg36xkX!yijUtpYT#qwU;K#0_RIP!s4k?zc^=km0z0B8 z-YkI!v0%V2%_fJ_bp`kPOQ?>(@-*Pc4Kb zjz}bIEr05R%V(jp!Ce;}7a8dldS6=$=0~&foc?51t3Hy;p#^Ma4YD>URH08gQ;N)8 z1_nTY>2r7tMsI((u~QmRKL~+!3Thz+Qz1f@UBIGXL{}4@BjXad0l`8V`q0M?Y@O6-BF&TfWE@- zP15_YP-j$D_H=*E?Y58dpc2fcq9a1SPLP+Bqy~$OEK0kmA*!a;C+AW>Ru%kZ^F6L{ zrPgrnpZL9vL`r@+2COh*8mS?E+K&jn?z(d0UT-0au(2xPA=fw@_TO>)K65;HBETYH zTHUXmXrI%QzB!hV&zHt!x83xEL7%U+FtgBSbn1)vqg-~XmFpeAc+P zBC`Fr5-t*uBK>Gn^YcGC@2_SPn9b0x$D4(Mx2TkyTwKr@EFduy%Hbk6H#U5ryR@ut ziWRH!05|?teE}Yyr>nQuu6lB31c5u({h)Y&e6q}=dh6-4<&E(hG!d1R|>)CQ3v*1UtgYm zB%w2yE(!{Qy3je5nVgz(_INNcF$r4ChfJ)eg>nBTOm3~qS+2VAe7v+hl-vZU1x%Gh zhFrUNTJVP$gy^VQA_VtxzYEq|9WFK1DVJ-3(cM>Yp~dAUHyHhllO$X~1+X_bI5>;T z&ET}}+U$?kRc+H1NUpVTJe=j*XMy-OFfed)Mx)b~5Zu`-gM9;i*gZCe=vejia8qYF zVR#lra)^`K9Fcz|%Q!YRhM=|Em-&a1mR4(J8ldtzcX#*5uU{ccHH($R;>fc%g%mV2 z@_yKt^0-MVl}@KKC}ygJw?b##nhKX%sCTm4O?h~9eQ7lu+OIM!tU3A8j0&r9pDhxa zMDNCt#PrD3c(Y4yJ(}_Q?FXjHIzZp)Sc~?p3kA;x{4^R#<#gIbh|*|vb>97@8`yua zJ(wup7_2Vvtljt>>OFZgzN+x5!>expmTaH#OnnhH93s;^n`lyGQfa}+lBul2>w zkiJKhan1;cLfkC3gyC{v z@7ozd;pS=`ht)FIqd9<1IOymB%kc9O`b62?!RP|>;TZJZxKE)wVSS(AXw3vS-qIMa zh12`Je-N?@)_z#I-^zH!=Bn_ffqfk#y`gcPrO|v(?$YVB3Oz@sKC_^x2#x(M;+Fv@ zCnqM;Nzp1#zbIX>GiJ~_9*T#jr& zn;uM;p8#$wQy{=;g>^WX<7Qr3LSkbG;wBUakIr_qRLKHR)8;sjNXe{bV5tZ1?ivSL zsl21hT+R6C>iCOJyKUXj*w`TF+(k{G;_Uf)XlAAkO#J1pYn;|Nxc3sx(QBvJB1qqh zttTTFZ*&QThlYkOaP@=@HxU`6N$%rrO|l<}A&t$WV#*;3e)=Ze;J27-s?U4TM$$hxyfaJ!E3&){;h%1$ zl3BPN?O|b|&aTebJ)R%IWOa3UnSh~TXu~V55TUQlMOsdd{)}gv_-60R@};H9`bz?QeJeZ zO!F(XOmK?rf?mev#AbnaqbjY=7lx;+?Q|Y6+T=l?H$z8%aCeMjem+|yTzI4I?{BX_ z@0rXmOu$InXX8-n1rFxy-muK{FU_yo*fc8Q%Qc*@_E9-Iun`bWrxudkBmwyJd(*MC zUt@NtqYB2PFgS)R`(mY0gf12H(UmWxmR2kGe;>uD$*%#u=LE0u_3PMt&j-KWkwlsg z&|j7dN%1?PV`2`cYp5ot{6nz;8}@q|esgb7wF+>7fxfqi>MCmMzG-YW-hsTYIIqKC z0dcO^kG?opQd0W$6&+pqeOr2agb@oM1-O4)k{g5)IPy;74O4GqLP`Sllds9qEuEG1 z^45kt;INvBLQ1N};V1;1Mu~bg(Jd0P)DXy}TZ2P(c6NIB+|Y{^+-Ka3h=fGcDBV9@ zw!I11VI(Nh;!nUkKXSRe2*7yc0fsi2RF0`ufdH^M1B#)|KAGkD`MFu@C2!CEf{O#E zOPuYxFxiY{!_&B3u8wIaN%K10LdmJ9U@{#~mav&^4Th4Kbt@{XaahbT8X@v!(rJyG zp`HS0Q#l2s30VzbUbwZ`Kk?4-dW@exCk#F5n0i1!FkFW|>2>z@8Ubb?CzwPO$`5oT z=#8XtTg=xADfJ;-`RKQ65hN#Iko@lH$pajsOghieXy#jJD5I77GjQTsl$6TL%l9AR 
zP*4<}ZS>Tb%~*mJnuEc{$HoaJWD)u#Rx{yB)6tCfETLcm2u{a6{(XpCA=f}&H#awO zc+rU0aBytsqJf)z?`#lW&t*-FH#u5G$+Uj4owMQeLWp5g?Sizn?MF%Hs4>$=?@FKp ze12YD9#BRwKU=SYub_G*jQF$Uwmzl0S|e>GvcQ<0>^Eepq`16 z5XdN5xu`$PN7Qey&)r0VRA2j!ygeHt={Z(wkMx| zcyz?;{@GmYEs!xj`YH2<6|e~cZ_%VrUgv)>UQ1CUU3LJ~h7dsmaw}6r~Fvyubxe!8p$Ii(3Vopq%!IDhRoE(+r|4PqELGgT5{9BU~ zc^UPhm?!HHxw&~zy>{p#c*Nm|S2_yh3Trz(OKxvHL$S|WLug2e^nsk_TWNcHXP0f+ zTxem@s@p9|TJp~6$K9}c>y*j7wFDz&6gld9s})asPMMEKEA|Bf$~_9#|97`kQOX~A z$V(&jvB1#K)WIjTg|wxnztgvDxx@+u_)dJQc(LmTj8ZiC&Jc*%i#b6lek7YPo0Qb- zyGDqN18^Uwnf4m-3VZ_8?&H zDPjS3#2?6{wrO4D6kFQ{`A_$#Hg}aPQQcs3I1@`;Cj`Xk_4RPbPn7s*`GZlu>D}<1 zlz*%RUzmEtU*b8?eo@KEn7FtXr)%Ng{=8u+33q!eK#5_7^pmb>nwR^N}xc7=IrU=G9#n<_itZE zt2IAFK1t%2AEbbxpr|;K!t;b4-@Q2V@{5b=#RG-vwO)WpkBJE|t67Q0w^sY)?y@E4RFFs#_n<`*`u94>34U zKU|ku)|g&wkH~Da`gyj^`oDKTT3g90!nLYsaJ^$NH38E|7dAFFr}6130EBmUcZe{# z=7`9s@zK$YtTTsGFqJ({{c=ZMZ@+z%II1G+PI;e_92qIpruhYYey)hHa1S8+#|Dt- zA}L~X8-N)4aDHVbEe*F(u7@BtBu*l(CFkmzb+RmVx1E}r0CJL)xuvD!B@JsS4S8lW zd!g9nlb#o17f}S&N&S%UBhu1nsOad?aeDtef)w=Vbeg6snVsN#o0ymYE)qg6F4vwA zOm3HJ*T2u)z-aPJ3}QjsYd)YSq>aw{T)T?&{GNF?`EU{iWa|Am?7xeIF)cLb-Vds} zYD*&FWKKz{efd&3V?^)loa=vCN&@a61#>m1)TyZn{Rm4!OpGBD; zB#L;ufv;cBVjF(F{=<#)g}cQ<-1ZR( zcM5lB7Y;vi#snTq!Q45P&eoRU78@5^rWr+Ch=3u9@l!@#ktFyFl*vNvQNYa!nPdV1 zE$wguCDhNKKRF!_>)nr%Dk2fVuLmtOJSYiZGFGvYVmkfXg{-WsxuBqaEiGW_NyK1b<>dt>C8uXq`iE)p!6`UHRq?PlAIjqTghl~0oM_l{YTF1iY=1b5a>6{3GmN@fI#2o z7eEXZlf@$M`?GE(>cn&=2Oj*>K#a$!Ow+s&F)=ZA_eZDnRyTVvXIIwZ@#=PfSy-HH zA!cx{`TJhfOgtS>TKQ`Yx3{x}!-tdYEG^5*%TFq|{$S6~5sE#);Q|>76?e#kZlJHP z-u_^El~5o89*1SsFA{27&H$%1-cn_Jd|Wz(ZLCI})tAL=nQP_E=6f3ip!_2(>9 zt+$}JjV-7^-GJHoeADsxn9WV{*KbHUD?&oTLlQ5_Mz;qCB_%XuVTxC~7}3nbKlFQb z1vIpH7-?C+{_iP;{e(sf$?eW{7mSWvUJpP~6yR`coF_(VhX2im@tI^0&?$F{rt>6c zDv)=4$~ME;pD<$lHURJOcTB&BjYh@u$V5J$#cFwRYRVn}6yowL%x$K>i{Box*Mh8Z;6y?!4AB zBRCxnM@L5qy0u#Mx*nHijc#SCIYS++-;ckH}q<~ zrE@c^{I;O;Ke4Ii=f04VOd1a^lksS5v9e#I^t(dVL%qe@Ga%kY#Kj4xgbWV8C^(u; zaZq_Ep1@$85?Da)u8fI@7-Y2~ywseChVc7cJVqwySIlywaw$k0S>}E57b~BgJ)q)Y zPPEo=Z{)|k#>B)VlS%vT=(tj2x6=)S4ulmaoxjf)JHvbX`*@r`P@e2bwQQi^;k6GA zT~Cges*HwL!GD^-z5t-e_JzxtjEu|(xDJyAUpG4LE|h7t4YS_|_?oY@a)0t>oP-3_ zlp~-O2z?+{TI?Wffb`|~FU=3+kLjgOfm60SZ+-yp8oAtlVtpwh3&#iZ>pIXyBJ}3w z<|K1?pnt^0`8&plL_{mBmYN#?bHIY;fio7)`vaWJGoomkEh@(m>c};SrbS9kv zRhYEC+GM4uBF=xlp&0(f!m8Y0^Y=Zis772!y4(i?LqjZV%uIGb%!ejm*i|tAX|Jv> zfDICsX8V={=EoY41xVO8tN@Y}0qlEPU2-xuwQolB_HN8ZK@sfU|S+^Ut?SlrTtS+NK!TG8nvC5)#2qk*?=CV!&(Q zaj`9Bpzs`CjL++Q4*0za{T?QZ`4|*+adFe#2twfNq9sOsZQqgC(avHqT>=hD@Mpt$ zz%$BmZ~T40#l%Xw$~h7qntvM_8rm##g-(<5NE$GZ!F)8xbB0EO0J6MP%4fc(n+3Vq zhrvWf8_Zw5Z&aOui1A|MD%FC_pwAYmGLlCCx4=02;pW8gc+nXQM7FjmP^rHsC%=#9 zh#4fyL7vfRSuc;Kv9jKdwu62syQt_~Pu|__S(4Qo@UH4U{{7lSCQNUiS)K$S+^9;fuo5jA7u{ z(t2@70ZL6yQSqB(?$7!J43la|BqSt&?K^rLd3hlV!LWVfc!I@Z?75vHNI-zagrGJQ zo|YE)0cKvu&Fx7Z3})Ppv@9%fA1%HgflVEB@e5lcL!ppq8#I7^o}{ECrMS4Xw6u0U zK45DHt|BmT@p&)-uinir3My*mtH;O3PaU0@^6sZoK}_>@6cn)b_9P@EJzajy_Na=# zJ{Kt#qZJwNlxaq|87xCp;<8%egw+Ac%DeY<%9}G3s4pA9LeYl@gOcC-a083QG&=Q^ z9Sk4o^@G4DnjSU8^J@F=g%NnnCrc~O@#z=`o}9zT{oYa_yPp2Oc`xn7SxeSz|3_Rj zLgA}ot(Ey_i$*ycUR6yT=IrTAAMxlskY!V?sIVE)WXCt;xYB+sj{nb38%HUC+c;gx z;&OK#eG;zM+AfqClXBXZARm8RV>N|=POP65L$aG!zVa(m{pPrFs;C`vroiTWQ;f8v zdTey`S7oKRloUwj`X>d2g~f^R(u@#zgR7-`wNH%I3GIwlpud(_m?za4R{-CI zm6g>pX1RA4B0?ZGFc#$GP&tJgX`^c=_`2Mr7I)92B~5N zwOGi1|KrD^V9YvEW>(gGWvp62*m;~1QiQ+8`^4nvRI1gl;W^8v5q zbjVO_`f&t49`{8=G%~6Ky0Qd1UDn2Op*$I>ut{`eV&d@AY_x0!)e0CWUkG!ju&^+& zuY$E>wUjrsP9iZoqx&w1h@c?u(3=^77#w+V-roG#!l7#00o5R%Q$r2PV!(iE?Q*fb z8H5~7W_P?0kpmckr3!tB2dsE-Io|gs^dl1{bAFrc>hfjp;QgwsyyH?GO^2GZ{OlZq 
z#ZW2BT%=`HC?c2BR?rxB|7)2l@BLQA(#`d|Yuh8qWPu2Xzm{X_-%9#@toRDe51JajcYaukBtkuyiC2*K z`8^mCs;iWg$jWNHGt8!}%=48HrGqy)G8Q#9X0S(wt8MP^KdV+O%QPDE^TLj~d4c2p zc*~H>$nEWu-W&25hK;J5aFPSKX4eNXc}@GjaU_P!MU1awDt!k73V>+I`4$DV0Bob& zs50Ttv9mY(V1aWM;KB-BQ@u}s&&kO#F!^?My~GI)b|f!PyvK=rnF?UI_A)LkEVSDm zG?}m7i!av}d*|t2tG&`XOMtBTK1d!x6b1tx-MFxPdD&Rw>hcN`86H8BoR?|N1s=YY zFgb-cyzbC*663SZmlgM?4)>8U-~=nxHDAnjxdAIw^tegKyNl2o`pKvQDA`-r*QKDe zAg-3Bxv{Yk7MscX=CtqOxq(1m^4;r$p~1nAey&g3uqz9D;9r3p;VaO}J}_EeUeS(KZZS6ZkhB_c1AU&3J#QzZP|+|}4g0igTmsG6{(*{M`!lfbMuZm3l$ zvR}W_muM|@Eh&NkB`o|QN!>hxoz=UG0S>3HP{Kbzs!n62O` zR$DwcK$2eDf39u?x{UB0ID>{+bL%F*DkxOfR?Gnu3iL!W>4Yy@s{H-^O}pododA=r z0pnJnk6!eT;NW1O4n=@o3}68fh5a)0eSgwqYJV!3#Y$^+Q?QyY>=KAo>b2&H^!gvz zGlf2hw(t#9n@;TmqPbo=PcqS)F_>+o)lGoX5-;Wru!~_h>?^E4bW9VGL-7|UR%^db z+3Ur^f@ZcjFT%tCd<$cC7O)`3!d5bj2jlj-{YAW;oq|2Cs#`I#A`RrH(END${lnGK zV)a)|dHvzH7UB(F*GXFQr8H`AbaZspieS%9=k<`jy9g4Z4-5^)=FxlS zY?d?;*4U)~mNFAibj18%^`g_Ly^{c3C>ZroJOu?Lwl_C(va@~IBG}&I;EdMu8v+U# zvNW|IK}A+JB0XI@KMWV(L>iqG@6nl=@X$~pf#5fy5@LLd1cY+1KsC=8VJG}c5{y2Q z$muwHx&{!jVCdHNCIR91S7Cl`<@JZ2Oqjfui(oDTwm294%O6S4 zOpzr<;iRE~jn9qECF%E-Ljbvy&Eoe+l$7GISOZB(;zdRLH1F`aUS)&dc@8S5kQc^v zr%lvXi4hD6Kz~wXN-nN8hmt~m*cAI&PIn060;WgvnO{<#Cs?fniO&9@OkjPm-VnJA zU317@>)8F#9wWyuFQBNbF0U>jr5^2$c6;K<0@dPBP2U@aYv=ZiLlfZR(>Y78pT^?` z3fx>z%C*&*tirJ2kky@lR<&3mpDzFbzuW>m7*sG%F4g5AWMG~)yE~T+QT@jwUwWz4 z4Lg(^s{sh*aCl6Cj;Ao1*HgtxHRfaMlY0{&V<5yiUvINvqu+ZCCfry!@2-AydHD&N zWL93M)78bh3!s2|UGu;8hs7<*%IfIqgU9IxZc{pqdh4@s5@5qoz}+{!OrW1uQX(m& zZy%eQT57YxhqgYtw?}#PGe@`6|4%%P85mxiKN9KS=%}>ZgImDO#Wk#5@7Z=Y{c7L( zY`q)6Umk!_fM;IpstmYG`}Vg4B7O+G4#Q)Av-Ld)8=}JuNRhz&>sG(vAJZ|myR2Cw zB;i2&vw&^qbwYDZ=(VR8$e6Afji9-#23@$u&JA`pe_P;cyEUKLZoBL@FenPeq=KhN zx>1fyPCh)swY`;ydGBwcqC$;2m#(7%d?1d74tv@7@dj-SJoO;_?Z$ z3D>)^-EqZIH_kbkF&NfZ<= z-6ls{k8;xlKyK}AJELE$Be@h#d^m>#EF`mv%dlluEiT-ZThV`r`x-jLpe?yF? 
zpGx!T$_jvi9RFZFN|`L@3Azt3WFXhM-kPuU@n@l-%J`c%U@Th|Ka)uKmyLWT5hNjD zvog-l-FEYU1o_B7$TnIQIuMTekFE53>6itbXSC%Rvc+in>wKApG&cYAPnZ+N_Za>g zW@g6%3JVqzfBq|FYXbuVmJ;p1r)vTn?{D!50QV3QXbsr4sp(RZXICN_(6tCk#%qoaV>!ouO;m7Dv?9gGCG&*hz z{KDVJjG>IuLoyQK(AD28qzT%i=n_Rtfs3m}PWo=hMUoBeK1V>!*5#T$fv=H6W0@^GEyGcOXK9kEemy7k%ZYFo46L{)Bae<5g zfPGl|`{(EHuXn(9(k-$)pvEsn!^+5bzs&tNmI}mkh7a5=mSg@c3lcAS(Kl-#;7Tg= zzFlE5Zqm>I$VJdJF2K*w(E$uG)I%Zvd^UKRiYPZZr9hCnaOIDIzZv~h2)IL_w<{0> zyoZ2tC`qEIxHI8J0@+;~sX1`xC_mF~8L-*gPH*EEbp;mz@n>~)HBgwg zaE+7+@bd$Ye3)f=T@0w5G%Qc&3=SZhHD&aoQc z!06OZb_#Eb-U#uxi=G3YIiRj08GgT5@x{pfT?kASmcN;r+f?seBui zskOCrqbHmf48gVFbWV#(;xIf^F?jAw_vj9er_+W5PWi-$b!UHx4uw%Hp*e+o<$zU)Vd(C#k&I$K1K})&(1G2^|w*w}Z&vSQs)mO7Rl70*gsxv!` zEK5nhvi!uPdK5GcL#flHR&FiE_#V4i^h*nHuoJsn_%+s`nk7{4^@ll0mDHFy4+e^uE5I@ zvcM_m-^h{kdy+z4$17FPv(_^T^z%zw%Z`GfR#8zASaR@qYz?{s5kHsNJNP%b9lThu z!*Dsb?;qL4-s8SJp)ZoIh)8_COs0b6a{bw5KeAze5bxQ=#ldpR!-h!sa2mVa)j%vZ zh+vd!{{?s)gg)yXt|*&-5q$@yNLdBo&K?;a7N*>drBW_4U#akkd1nLM=hj>2-|Xxz z0Ka&5dmAwb1$J{j_LHTSU}Hzi6@OPgC14X2REnTrGr(bk!;keW!T=sUFe}ToxZNBB z7Ej;#<^-8^x z&G@G5rQF@~@%9+M&pZ}=6wSzg7jG0 z6a}Q-e`A`$rLn-2M}2FruDvoTDwTX@SHb)_Ha5nPOM^pgZhEZPYhrAm+M#>KG)7zy zR)3>A7;UOhUL>SR^j!o3dOE#+&oA9h6cm&|Au+K+;5wPTUGJt|c|{L8K)}`K2*IJ} z2bColG}P1}^TAi^Y%40ATu>N9BV5pp+9~Ec?myI z^t;ZXd?T606hs5H*#~^&zyYzexESd7s%5;^Lf!U-N0cS*+keR_+Ujby_XP&#hySsk!n*R1R5A4R*P@lI+6A5Z0+2L ze0cp7mKw8SI4U(j2@~Ld`EWoogz@?D^zR)b7gw`ksoMVHoIAC_y8w848yh9c*ETjb zn3%&I$bh{Q966VZN>F(~9mM-a(?(URMr8|{wB&2%-5|Yw&Ccgi3 z?iu&qFAT=mu=ieT%{ia?g>lw&GzE$sk^ZNH>t5$J-#)P)r6u{R6|SEHq1*KBfysEo z`0db={x{ZQ5pC}Su`!p#!^Tm*gN+|)<(Nt|pixq*qS6TeL;_`N`mv0us@L83{IpI1({4LXe16k zdkm>@u~V9nelsaiKvw}59Q+nXxVo`%wc3{-IQV3k*_urMAKdOP1QLe`=A;zg$Hv@Z zwwlURksY5~hR|emB4aZ(HIoGFT7Vx9E_yIMlMP(7%Y~JctV93I8r-eGxyh>b6EV)& zeJGxeAri|e{Bm)sT-r6a-$bJT*6MpZ8aA3=pWZ-G5OGTVXxwqBiq^k1`|f)zj)4AW z=ENgxNMb$HMc7=qSz)cY?wC?cB7qs@%8;;QaC|Z#lc2-~0bUN+cL$E{2xndJON_{> z|AK=DHgv|SnBwAMwx7!D0_7WTi&-%f!uQIMET3{i=j${~Vq18Xc}mHREj4;}mIvW- z&ab+|bUX!2rUP8jY}BKym-kL*T&gMlCQh%u-b%Tu7Pd4@0@8-12{3KgZQU{Rs!EjHl9C7$)I0?*X!}bg{-nRN`|jY&W8wP!v5q2} zvX3h5g-r1GiHT>PG$8n^#--+RIPWZg@=(7Wunq=cQ}M*!rKhFUkG^toxkx~vgG~O{ zOpSw?nVMSc^7A|3w8drCPKb-^7Ry&E#}qG3uRJ^iLUkae-_Y~)vIVJA*N6QMzo|jqYKYtYNOvMB;HlPBB&Pj6$!X%}5 z)U=QQC$ACDuqevSd}9um=&)bHY%O`aX(`1G*o9^oVyuVSBfh{8kJ6|}R8DP1keY}P zLk4;uHU;cZhWd*d4E)#W8QSaC`K4Ne!WwS*DeO2&z%N{X(ehn;oz6whul&%Q+V_Vi zGU2jm)_Y_2OMV|7QnGsWlFYVWYjC9|L*&f5*BWtMw4ov(+}8d3@42l{jPbHEj#+f# zZr;<*UkYG>BOyLZ`2}`h5`w{f@U)8BMwT{9ji)DdNO{&!T2h3ltLr5I%7d8I>bKUp z(&iRRBG?I9Ve%R5Wk&Y{<(iwAS%!{E4)0D=;Naomua_ERQ&pcH+%8861h@c;%yLg~ zCk5v%6bM1Pmxqk;~WDQ!y5E1!uiOD4!xMPA1IU z9@|d7bMQy9vD8e>I*bMJtqT#j%(+yFvZc8yF2z7Z(8+ z#U;YDAF?6owNABy@OYf3rP<>AY7AD7<8yAf*8PlKUdZZDdmrpSB6Wl<`ol_($r(C7 zF^JfE`vzhmY|pLc6DJbWXi{F8R14}3l}bpZ2sC7_+s9@^#}P@uGPZjxQ~c&7--^1b z(*1jXLx!qDLA^-{JVc-H}_hx%g;Cfg_?@uA?Yr$j+iObUn(x}2hW2jf5^a3n&;ligs=c|8mu&YelX|2dfh_Eto+-TYqW~pK5E6&)W zx@505Wb&OpH2>bI@tBL{7UaI;$nt>r=&45+*w{w3-zb__}(kA0o)Z-S{%T1CZhl>_-7oewipu) z&A|4;Gsr&?5mpr|k<)e&B+;Rjfs#?HtE-t!He^dw2z|e2%BBGfgwuAF=*`$o^b2f2 zLf{HSc;$O@Mpc6-4q&gkI$-gGSrp0RBaK3qe+Y#HQSH)F)BKlQ|7*?gsLgu9eJejB z1*@X2;dP9sZkOKp~KRE%$7j05oNoNHmWuDuk3z^O3E9uribG|}g2wr3&7 zZMfE9xMI%L&-gT$%>b=AxTKy8?a1P2MsULgwF+X~$30%1Ma({bm{ z@YwGm!6}ceM+L-+`dnSvf#D(J=Ol&+Y*R;`Z6#)AGGXCv^|ky0y{BhseH4d_WM>Dt ztIIR@umg@6$cG9#A}X@N!7t%{O>>bYx6c`sxAN_Z<4_UspvaHYY6%NDintkr+a5tF zDNs#nVN0csbQEC=dH`_AK?!&q95JvATpWR%oo-bIx3{;>yJObVjeoj@0|AwQ$MsOl z?C3QTV0S=T!jh6^z`z?cue-o#4DAgsxlYw;N88nJVF?;pI7i3Rc+fU@&3g%Oj$;{I z$8%kRms156iic8aa&o&9c-awRva%yHGk-yO2Lq2r#O2%mG?Clfg$h*10Wk$&UUi!+ 
z**$cy%^7r7V(aFzEw|UmP0WSl3#kkIfu6WzL4( zr4b_{e;Be`ddq8~y1cPO8aE9~gNU&=_0l5pfHJ(L=p?2a_|We*{jvX6M8iZM0qq@$ zC?m65DXZ*32ECd5G&es(sV%(+GT)i)WYYCz=uQ?ESO%_x48VseEtYso*4tBK}~hXlOfhc)UwP+^-+ug>@9-jdBs zi0>~qjZgxe?6NX?$-bn-3@A`#3OL?hcSFGdM*hmm3YfRVyy0(k>Wd)kf0od$i)HyB zSx>-bURYAn*4!*5En~OT>H=(GDC^b!-#FUAg~+6t)&}cXB7sq^*b*r)%ibN!T?5!z zpdtaVatYr85)!8N)hp26@P(N?E0o-b{v~|^VZ++W3BG~uiTEcA7&4Flp;S6k0l&;P zjlsxoo-MQV8skNPUpy{iA!v$QqVO?w6mismjV7K_kHE@jbeLl2I?W zr{cA3y36iC8-rbwfr3h_opPj%s{2nhO~$+oz)D}J_I>CHeI+l)9XpH&8p;+5xjSr_ zm6er^b}P3RBY1`+xdi~vXS+Dl-@gOcA8{DN!%z0`fcBKVGvRlzgG-3%^mu;*U_ufD z4#Whtq!tjx6nt|zJ&!l%PLkj@2?ccwTSH_FJ5CVjV)ReOonJPTew1)`@7~~f(p~b~ zgh&R7~#D&fSc%SLbIsv1X<=aQc$jbKiS7#%`M%0|Xs88w; zq0wgQY|g*<9W8UsbYpG7RK1}nxGV{s;AQ&}p4Hv#Xc;4* zgCb>V9XT>$QBo|i)`E3+?Fg7c2C2V~a&QJbs5Qogf^FP@Y;#X~4k$smOMPY8W#Q>F ze&0&c&>VUA9*E{uEuA?1nXWj29Va8@q5Sz5m!G})TqrWsYyIkPr5ua$&v6Y@J9#aZ z7Hv&&%W=YD+O#12bEFQL;d2s$QeB(%7yQtX1}YEzyIe??bP1c7CWpMfx}Q$gy9wfQ z&|`8%{fK>iUbUF*?g*U=$B=#!Nn{F3&q#lakfh9;+CRtyJrb5L1}KB_g%>YRyEZQ1 zt#+mc@GP~{v$zVQ7%+^4c@bo7VQj^dFL5u@d*PfR7Z2#)Ovx;WhK8^>XvxG076v*= zh6w7}>V=eOD&S3H(25)u*_%K$?CkXve5(0kYI^z^J!iH*1JcTbRFc!VzP{}r!6>LL zaUX#I9JC5GHv^_1ZNDK>JR29D=}oJZ$w0d8Jh$H%^ipdu<9>GPW)%BpAj?;Y+#)f} z)-}ktt|)fYViy+bhN0DkNZy9iVoJV8WcRECZu zEYS!C-&2|dltvF0UT@hGkGJ{4uHMYW9{M@ba0?K+y0g{Rd6Nfdt_gbmo^Z<`^&IS0 zvNG@g!+h*eEH&wxT2XP7ErCfHgqKcQ@m28~LyJ#<7%gnpskSECx(b=9>-v{g2QvBA z>bDGJxEAP4B_L;V6Wa*Ieb=*$?c|9?`SbWoRr%9gUvf@ z@?A|j?tE|Ax)WCW1TUs6>&)EQMaV5D_xU9F+Vq~<0u6bV( zt18_{t}d2Fnw46Qw6^2b-v-4hd}{#6cu0Bs9_4C;0r9MYuh9f)>nSmQS`+nuI0A-H zCntXmnynVrW zInWF3_ewmA`Ui(tnbV8(;*^QZuY~%oGd79Z{^c4D4dGx-$ zQ@|pcn^iop7tCG6Rk|9@fCeM;s{4I^0Xy$&DlvV1pT(K7S4V5FP?0>2_FKIdSi$%U zCo!!#2!`g{uPWQw5-;6;HJxCo{U?Ipl2@WfJ z#Q&Pc(^MM-?g>#tyUD9j)hAgqYQPJ^z03P=1cp-?eV$|n&CVzu8Dr-po@1ZBggqs_!;@bbJwQRujq9>0N`%1<~6`nL17}pw%qK2PrHG z&E|_g9DBN~XfG-nuigOKV(6~`F#tie(4sQK;!8;nEMR`P%^xNu{hgcy8gM`KKGNp1 zAPu89var-Q7ya)2`Qjmk^@Z-o#OqD9H)N{$0dV2R{$Go@`+n;?)$ zMoCG{tkW@lHBcp>uY2)Z>u-q=T~hhb3Qg67frme8NqT`UhpkXla&SP-*ozJ%EF7y} zqhmvWLlNzF%6Hm>fWQ6K77UY;n`!`QL=m)?-Jb*iarP5}K+@A^#rWvH0HuX4ApmX2YF5zq_)Xp<})hy+FFQ_O~iQoB0cp z!l;(;jU!YP`s^gStTeByDN#}JJ}Su9%JS$pk^;enf*!RZf8NKj?vatj&wV4y03hi& z)}JCw6il5!Ov;??)vKINp!|DOsq_VSnl&leuIK+;V0c zSwcg@EF-m-k8897kJmtv&s(y;qAZdbV7S44_c!_HoP~_& z;FvrLazwi~UEdsDrxURy@J*ihLLqMkqflG?e)o=9`DfW*7EzYCWPoK|_r9u!sO`s7 z3$(XC=oz%OwY%fxp3LF0fQNy`m^t5QW3|-#jcKbY3!8PSH#Dn7T)|TZcx7zu8d=C4 zUCG6_;a$G-g`$lCQ02y-1bkW|;&5jGn-^Y3TU}Zj6j9q;NyJIo`9mRB;c-)Zc2p2X zosBA}JY56`8b-u~M@EE4#)J<@$k6WwR$up?gBEVL`=suKGFNHv&3J~9Gd z;)@bFjMXGG=838oEP)_v z8Bpz`#U2(61?%!GB_%(hs7?=#MBPW38B8d_OJ>noEx}Qrp*RydIvzBu9ywiP40d@3 zSb!x_FCH#nzi5vuibzY3PbILNzMu%0-@YHtoli^qjK<%AH-#6X?a;TzaJwXYkD(q0iaeO3jdWF1X}d`zPFy8(+NEYSI^E}+z)Ty zd3l*FmbMmR6qpTsq{3oB=M*!6A^`ZApzJE1HzoxiZ|8=pQFvX0KLy@makI`occ?K+mOg?^;a}s*CvtBQJ_Z>PgQQYYR+s1S@xCD_hbO(VV zfYZ;;rhp98z!h>+$k$Bd04%4prPY4)8%>iN+kYxFzj~Gmku?I|F7q6 z_xLxKMl(}W8o8{U?d>6%eqFArr6sQ5BQgUqAj;K|P|JFX`^psh6gGbN8lW74Rg9$m z$)J$}mA^d7WrQ@?LT)Q+Fz*MfMxc32o}ZsUNI2!`=`asGY@d<7-C|!~;1d!eJl^)X z`%YJSJj#~U0w?aNIbb%fCv>(Vivz_fA}k!QI7$E}1Z)i-`23*dVa!xZ?0otsf z)Qzd937+otgKa4-IkL_-5sYWc<=qKb&E*M1QSu3W%ec6V_)R@!=2wZmyjO# zcDZiDfco0`54$See-^xgf>Lm|>r#LS*(FXJ1&iFE8 z2jEn~nEA`Y55+X{baNH~G79@ogZ>98=&OCL7$Gpn#7S1tpe}fSub_$dJo8c^=TQBL z2w1ZwoF2n3`wcmhCJg8q{`0~eRq&vLXJxGgrbB|hCm-(hzF;tC3HG?dzBuh@=hNi1 zS5Yr``x?mU%81jjk+Xh(&YsRXR{*Rdz#?Ih ze%K@N-H`)ervL4ab+2j#zw+tRuk4*M@;chv%RT7hoDNA+jvc2tF{8{?;O&-oDvGJq z`pK%pYmSpW2(oQ3Mf(Gq*Rsw?*t?MYU>LgvPeJ)nUrv6EXqQ01O`0bH@Iw~yVV@FgPx2FA7Zo`Ppm@#-?A;hi{ezID 
zJiJ{Jo>vJ-!KBz*T`D*|qaGo2$S(W9y(`^z(Wb`QjSLOQOG-0PQmL>3Y$8zC@AJ)# z_@DoQBuR~d$<@J2!phkSyyxul`5@T91aV?8zK1OI6^+k*+WO88??C)9TcYuRRIJUm)b@rKAG4!-SaszQt9)JLIA!mSh{4vepDc z)}HoFb#(SaE-DaS5g?&{2LivkoK@Dp+Nla(Y*lq*-^>opr{P=dZUaR$O?6chIUN58 zRr=dmX{dBZkF}Mmqo`1k8%ay8m3uH2B$yCHIM$@`P_A#7Jvt`+_NDt2vSUs|Q*wRX z+ae%)I%WY0@^wjo;f;aeavT3Ulk}v>wecG3^Q17WmV0PEDXY|vxVR6!ls9^^+Sss~HHMs=YbMfreho!%>3?5Kw}E(9JZBK))cKp6 zfH%WKKmdTaqOl}TN9=--!42;sxl_O#QwI1x`{irG5zREPg}B{_S*$+omCsbCk#qxz zBn?Ib?%k>FrH%L@2e6+n{!CMG(}o3|MTY}HW4q;g3etBGZnRd71=Eq5B$z-id@QSe z(F2QBH~~cKy04@JKa@0-#bF__h=XM8>_JWOqKSD`lI31TUDy2-eb z1gmCfF~RdL#Z5p`SO!+L?i1K_IBj@YjzlQKiXy`C0ACZTr}=yUE7As>(bGIHvkX%S zF$ukv!zKse_3>5B)^CaLkg3qndANJa z^~hU5^KpOj{n3hB(i?4mZ(+8*gSUSr$^Xk1r~_38QO5+65Rk}w2m0QRYYX*(8KGF2=@KL?|pgh~mXVB#qL%{n2ELj^Qz(o zE+1Ql?X;KmY2XYWu{9Xp4uCf+}?zb4AUD+2lS=E*_f!x8XD%FLJ;0>0(;%4UB!Ur*nl%?K(Rgcigk+$mGui1uB0Yqw_Z(9 z?wg+u1^R^*VFWyU%XAnE$R9#fgTGkHP-?88P7}cte3;|1@Oe2*5v# zGJK~*pA>!S@O}GpWY8dW5<(HVEAC7}VzWr*&V2kQkZ=SOARV@MC_bx+lRa&d2S@e@JIk zp7VR%TUM%*Ic8B&lm2r}k6TE5-e-8;>40XC4&@ioWIqNdsZIknB?BLtY|WJlCN%Ut zYPqHbpBWO`XSW^HiSN9nmcfA98-`6$9BAu zvbo$gqKs>6Q>YhuvhNi!0Z^ysMePKjZ zRaW-%3mI0QaEGrCoUJXsX`fX4W#RRgKLn7?M&mx?MxV3uwqqWzkcNvZBjz)&jBH!v zkzkY^hi!%McTSMz3+s*`>WG|CHzBA`I633}amlQkif?ev>8Mb8Bfi=eeAVS-G5U_> zgdon$&Brp4!7UjpS54yzbZ$@jd|Y9JWK@@RWAsBqCO%#nEA5^NvxXT*32b^Ndy@2mjY?wjJlHs`CpA`-V00 zEupqalvt!OelcRual63f`~j7eZhfuR!z9*jypTg&?l>oVGARL7{Da0^|NZ)B{%gu; zBI6)CVUw1pb!EP6zF=LEW!DiPS=MVgt-t#!lBv1@PhR)YEl-; zM6oGhi)OD;@$W-_Oc&0HCW0%4h!1}-K@b1%fxEGds5HeU zU^&zbz>lj!mi|osMp!w|Qk!5HIikZ>cZL)FXYQ?66PXf>4!u<^YweknlS}lNe8WY$ zH4zbmjn$rJAnJXW+Z)z`ho{j`FF(YR3LN088%~-zOx%vaHF~RIC&w`)Y+i|s4C?$Q zU_9Ip!Iw9CJm_bXuh5mZUyH?$6iwXMK&PTbyf2TCSvla!oX_a$*zoWFy8DSpO?5)c zKm!de-A3;r)Y8xPE1}K4U|x|K|8g*?O}X^}#gppNWeeu@q4`NjlJMl^^S|axc^r49trGjWD5xB|IM6@)BC3%3jxY5FPGKcKh;j zRuC-WxAsZ~Q*mWOMGXUChyAPgq&cW=f-4*f1l}M*&kag4*yU7Pmd*B z*PqI@Um3QrF(0oz{dHAtBHNdt|Nb+N-;F_JtD|!F-5d4j%B(}{w0CBeaCS|xT7XUHC`9@2Z37}Acp#uNvd6;Q(d)K67Q7EeBRrt)hnXdL_N zba;k>8IBoi9)Sa7*L0je?)DgcIL7kEvZMtELeoxQd^KOEY5lcaN7Tp6!QXwH_%=Yc z=0)uVy6yc)L7nYOIm$W~> z-KBD^ysztVvASD~j(f5&T+Ma&Z7S1dJo_8VvVrBzDcWZ9hMmCpEkEypJ|s1ym?M5B zn)E4LAQNsOFUhcc$Do@Y7q_@TVx*!>{L%US#*w~y#bPmqq==?dLdv0`;l>6lE&201CF*L98q)U^wDg}=Z(wK)mlw|0*M38eb-8O{C4*buXw6#h zZqm^-y3QY3)_L31HFOE{3&rz2-lI)Hakg3oto(u_h2J!P>Un!$mxU|~ajPyeH$Uur zI{sPGG>&lHS8{qKuBNWWI*IONIPikeEmRbpm98(or<^;f}i=Wag&~>N*{*(2Orp^wa4iVu`|dt|nnz`NR$N z>#X^`s5SiaS{w8E{8%hLKvI7oXymJ6PM%=-aWJlM_(%$7wQOAQJ+`MyfAxwNL#9!r zX6C&D7X##!1$P4S{S(4~-(?e559e;&9e9vq5ZYC@qjR+Q(HeC63v;}1abRi zJty74Q!`^~v1@L=t)wXvqvoW6`_+ zbYH{iPgh&ktU|>l#XJ>T-7E?#O8sLnG;c_F>?Q7=$cWRAO{diiG?-ZUFRa$tYCFU2 zCPd3>r3%cW;9t(zt{EsRDrzWi#&1ng@Nm2pAPa1razF~dUkiIdoF_XIzNIpK-GPX` zCp9GZbocWD_WNJml%-?Cl^b+_YCUCxJlziSx1Ok>?}eww^>F(lt4KsP^Vh+qHF)ZB zY=c*|sOsGGY`XpAHV~!5U!YnF>kZ-4)O zCxJ;`lc8KwtyG~ZDWSp1MoxY+iVkcjkC${BGfgET4-Ho2O&rzoIkeN_)PU zAjhEbpKdC6y5iRBPu!xBH|^#r`M#H#l^dm2EUPmrxUug_fB8CRP2KtR4?KJovQu9+ zCB=TGZyqfz@d5(jvEqln2Slzi%h)F$%p;Ii>?P!ivlo4D#;~ZButE5 z?NT1O(TBIDt?-o(6MoMuDj4sMT-2`2ag_-b$GY!waHg9CJt(gjd%~0QdskE9Zp^=Z zRPK@QnfrDl9c8`XV^)QM5Yp;aHFxM2trKIr@J0pxR9M-tED10hUr(|O7vCH7KvJ0n znRIcv&76GkARA0)Le`FPT4SIS04XdyaT`@-N?*$WeJ*l1`7x_9dODt8m2EXo?p3Q4 z)_a*}3yJ{Y?t z!@r+Wllv`|$Q?x0pW}CNlAF3oOm6<=8%JS!oha(-Hykdo)2wW3;yygLe=lXnV_F4K zJa}&Q7w;40x29Dwr>|9g=&)1JTrytP*xTp#J%y+VAQM`WTi`*=%!A+XrQOExT(^)G z;#($Yjg|hYpRrp|nO)S8HWZ8=7>U@d!ayhIddqhD80h#)P>Dm8hZcsT&Nq9GzN*NX z53e><6?tRVt;=mfVSGHh(RNKzp*tmjJXb~L*~?O?9P!yKlA>ULi)PT;3YTXOH>t0k zK%7EUo|l(ORFr^+M_pF-XtDfwypv!w?KLeni+?+}+Xg)rdT?Oi^intaIiU(Mt|ILj 
zT~aPeA*-*UI$CFPmg+KlCrMp;I4ra?q(uMgWkXo$G|p)OII ze>Ju|I-2Ik9NTi4a5<)iHf&xvG;#d+W=%KvZ$@ulKad4?Uv2MGxf=5a-V&**Ohfj)keAoBdhFpzrw7uV7n2(~Gp%AuzXYz?goed8pVHxEhNMB$7g7)VPMlAIo zNVo~P?XRkH5eP%A?*Rc!`(JbR0%=<<4in%0@U2Q3vkjUd`kvPPVexHP z>v-a0Av5a#8{Yf|-X2CY=8kBCGbmWFXiBs^p7D_suL+|EzRuIGTS#*ztucm)9zFF# zH%d4f3d*(&>I`R4OmHxjG!*=RPy4A#+6eTASSdYn&UX@2G)|XrLctTcyPeKgWX9c! zp;z&5J$pUC{>xL639K9}?WKvudbp=_P$b@B+VMpyv$2vutZ!q~sgH`@KE4_2E4Jfz zt<1Bi5&%;|!%lB9VG(bU>1_9&@5VmN!AyNS?IO$qo!om}F+r~l%B;I-Vh%uHrnMi5BMe_%7O zbr4Hs4dIsu>ELm-VfU=1OZ=G z35k#}L&SKzJtWx0_BHhh1vrt}D48GwS>ACekB-FXCa#s0|0_!NCE?$oRd}V;Dr{U>-9w@GVk#mOU^X zfKj|LQxdJ&dD0mY!S@YZzvMZUoeccKqGBNGw0E$_W~6-LPBI1@7NFsPz%vJmGmIVQ zx(*Kq5*nHa+Vdx=tadle%@!%Lgnm6;3+IK%<0FycPgxtW{PN=R zvNAe$Rw}W0AN@GG4$mtSi&=-2j0V5*($a`?!0j#(DsIAazIbsXEUu0cEP^gOW*QPI zPJ9)9b7gp4x3vXM1&o}Q^MXLA(ZLS0AwVdd^q0@zi$+RC0~hQhtK{%S6otl>H3D16 zYJQQ^^Yoq$3-?8|`|?00(%>Mwo$)}ng=n+y!@u55qWI$Dl5N7 zEHuT?%NWdP_Rl#Tw|wpnsoCF9;%gkfXY~fbV!~eAqcPUHj8IcEXl+8aoV9c%Q;|jn zft`}wgAQ|^_}sh93)?>Tc*W--lg#!Pl`?{R1H(q+ue*l?rpSH%kNoJ%;I;+p}i(q6{|J~|MY|c{7W)f#er_O!6KBx~yK`T%Z`TS+ee* zWo9H&kM4rVWr9c*GFngO^(|*f>qY5CAv!L8{vW&ZgU+w^47!tHJoimm^1PrTkrq(! zQ8P<+N44vA6QS^6X%z4+4SE9Fj7h#KXlc>D4wOI1C_=9agvnWYLSm^CmnL62 zKkyl7=T=8A<92!^aAW(OQ_jf6>$2AbdGF}AU9LR_N`}t|q^GICd*7UG519zg@bF6K zjvgCn(k0*xD=dWu{t9YFLf*-(h&qhtR|sbz4Hgfe49i;Xcp@5xM1n8Ml(Yx)Kuw0ezUL@6j$*!TLicvV=>}O(<36Ws+@MuX-zc(?5 zqy1>rU8T|^Qsr#aVHd0#lmSqVsjBM0Eq-w4koI}M7a`w#9)EP*JrzJk%0Ld{&ZNOn zkhaZHyT+A|Qm*-oztZTFTgNudaB;MNq_{C2^|`am+JuRHO04YfU|H$MCY$hyPsGRP zw-*?a9b}AyMKfQ%xUVgN-g$XLWl~fj5gNEv06#c9{80TWCnYfOV$mf91^1oNbC@%u zme0XOUyqeYMbe~==u&nMx5=v3$P(?qqJFROdWgS zrWdZFtz6frF*7?REgjx*X85>;G9}DTCRD=6XXIq8tPC_#vOwdy>~Deg zbH*${qe%3&okjw(E+tGsCzwqR?mDJa^rn4|n0YsLt+=Z6XE%#c?e9UOvAOSNe>EC` zeR-CZk`WTX%j>p3na9TB@3TQ6-z*EM-~3UgWtdRzpH*I7nQyG%q%!4r?O;qsUo ziQK8T`g;w_J=5FYWXqWu8PI|Y3@`~r&_2@T&>q?g9xXgh^yOAAVsdIzALo?}^eKRx z5tsaMl`)1D5JBd@y}zlP*OH5GW6b5SwbJO{r$c=`K z__$z_eiKU2PLlnL-z!&fF2Z;90n7Tq$Y-epA}!*ML7a-@4+IV$TZS$*DF-?GKl-QQ zwt8R^@j>Y61WptAb)?9ZFoW%X291)!w*)ACn3XuKh-Bb4#xvbecM45uv|GybD;)a% zJ)A)ay7^%-`tC4;`jieL3$3(kBfZe%pRs!sB~B--vf4f#6RkcuHD&iRApABYDbg*wxi?a

JquC=K0@(A-p>T(WipyoB9+qN@3j1xx}$GULpc!CEJ zsM5_m+sFC0E&w2c{#BApz?x_&?nk`r#y=3?O7b!~>h_h=Hv@hk33j@NMQ&DDfXK1+ zajklB(l81hEx|Lxal1GvBrBLUv*aFI>Z*BC$TwV-^S}7MA?EHCdbE~$`N;3*8}+3` z5IkK9cJ{O0(-YCqu;@?NdFH^Z`MT#N=wD7LusycGqQu;<#}K6wpp(cxrWxYE8xCoC zog&Pg2PU;7UehKMdwB4O_sMKZg=6CcZIu-$2zQ31G&v>SZ}#n?9t4_uz$#>8a+{sC z5ljny)gt!0<8;>6iAdw3prL8AS#*G7{0d55ikLgofdOkCZjcw74-EL|HAdCE^|!?E z*fv^ysTMrwtSFP2d$BzAWzd6dlI5)iLk^cci#D8&iqKdXpIz|};oUJMdMMdcriX{C zwRC^lR@XnC*C$3k51U74l4X3wXM^yh$>7^|@i(03C4fDWT{ zc5a^2g)D~($<5CD*<3R_W%lcU3;&gCqN;`lJ8L7zvq>!)yn~B7OcB&oK$c`@XP}@! zsQuRxGbf_`-r3C2<{`$hC_Oig45RK0WwbCyAz2Z>Yo*~wKX0?-vo_dvbKt)K*r9l z85u9`K4Imt#Q!CEl_5Fm`JL|anvz&An5t$Yrm#DZlsr6oG72}La#CjGJ_Ul|Fo=_h zb(*VUsx5X9t8**+FAO*>X z3nppGOiTOKz%oE{kH(nw&9F;kbl#`!c-8M)S(!As_Pi;LVmca30v31PpPo|&vGG$q z{MT4)#`7OcalWS$=a@d(D_yHbFv#wG-g#<{DBekHg*L zCnESjy^t@^JwHU?8d$at(-SG-UdTGN4a$!VvADS7HOq{KZ4v8f>N+I7DkFk~nj*!t z!TFLYl5zAOSwhW+vOY(*yd-o`7rpQ+7y6`3mK=Bj zv})dzX^}~1`|A0*h8lA0e#A}+X!md}g*1xIFdxh$^lTV1>TQ>-~UoQIzFz&CpY7VZtgewJkc%(~1%fy;Jwsh3gC@~@(x|nqgkap;waf4LAJeT{kqbKjEHH5%(T^`G9E;Kc*{8>$g^_C2CqEf-y3L^OQZsSZ0ls@6% zWs_nv0I@2ysQ+1PzBuE6nd(auz0$$m=8dZEl7xuMRGz3sV=AIrm4Uu`G|T#}EK<8+ zKKDPosesYArHr(K?@dDZroR+eP^15iGr_??!|(CKE~aGl3lz_oP@OauPjYU1i_zfj z1TOAh@6xH@fxC|e1@Ks3o>6k*FOdaHkm8$RL0zo;+_WU4{rx*FhRCeLH*g^+98xGf z&0_LZqf(5LiABN;O=m%c@MV6r5VHseD-G9H~CZj<9C?z3U&iRyEKnKOVG zFT+Kt2&Eva%!-uJno39%!?q8Pj7;0CE*t-{SwVX<*p0@ANRFBM;RV|cJW{;G*l(Ak zLK`b&I4r-rU!iMw@E*+wEfd~PcOtMD;j-^O$c^CY^LGd{*@45;d`ahi?j(!9XYtmn z3pf1)1ABB7C1m>toO82N2{=#yYi2>6&y@e2wyd0SgkpMfuq>!c8ylk=?G}22;AFGB z|C`EhB#0_btZ(#q9UiMq!LKL6%EIymyd95oKh6={U6L((&wbxa8jteW8FQ0`V?hqE zNJFr`u&Kj0#?Ugel((CGrcZb$rCM6}7bHUPuIs|+&rj`h9=lYf`H5P?;xCYjOfq2z z+^>9==KJqrg;+H&*VkZSqk~(Z7PukJu~$Q?cn$Rpx;z|4O*jO=Sm?H4)PTAAaJC}X zhYi8lpX9x1EY_uks;{5cN7{uF*cc*`2?2%Dyli$49q?#cU|&lPXdIH;f=Ec>aL zG0Z@*(9ElF1_)hnQg96|-2bBMtE0N=+BQ+T8|m(nmhSHEP+GdXyBp~aX`~yZTTog` zTBJK9X7fDn%zW=!Gk+2be&?Kh_P+1yz9Qt?5~z;t*1Lts1Z!$y@4$;DZMQ^mRZw&O zR|V?Y%?=r^T;M^;{U$vcYE&Zy%N9cC0a{s796{(H z$jasAw177`i2b<5=p*?tIvy-W85A)wJk5`T(lq6J^pp^&HhXi;3oZz>hl2V$dAk&L0ymt-D8DDm!feT+^V$U*PGDg>+OzXXipVgSeRXNtz^id=7r^1AIP2ZFbYGS zutg%8k(QH#hu%BjAWX^J8(=S4?(*A`p{UE^H5(fOf)k7vh$%sk*UpD=X-aMKvuQ&Y zFP@IQZ?CvD?F%Bfa`BT!*zQ4x@gTn$dhmp3RiQ;?5x$q$CG+2X+ou znecl{aLqIbF0-&m&|-QUiDZ{|rvJkB{fJ{8MRj(jNA((n|NCEhD{>ftUbxde!}ZwK zGUPN$2HPitU!6lc+lQk|rG9Wt)mn$kvlNNx?UE zH}L!Q$5KM8(s;YsS}-F9Xtw%YysE+BcpW;HAv8r46hPVc0(+`62CBcsgZ2jCPzNR* zvtvg{G}Om>ozUp0sO`7_G4_pW)>p5lm6TN?Ec!MBIU=C~X~|S!ghdmp$eRGH+?ENi z1RA~E9#Z0CH5p!r7H5VK)j%+Z*e)&EF*6&rwzA?ts~bcn59MMiRi{o;BsSTBb_w`o z<8>KVJ(fr&E|12E*0NPQp{J#-0HHJ=yj-WpS_s}> z-hmLl9^w`;bH{b)rJZcogwF~1rU<0pO(V#L0?V{z>)Ej*r$Rpjcj5P*YY}Eo$(Qn~14Yje~)j+Up1b zQb0q1NW-=*CIVZIT}-S$?QLNG_XQ0JpNA&{JbuF!84@^rsEvB!`=P_-96TJDR580* z`A8I_It~g(q!w#4Dsq^!I{)f%1mx%kXGkbynW(pB%(+U)9;pu75J3?kDe!tvut`zc zb>dPIFs$wP2#QdZ92smKoXwtWCxmrpD zg@AcmS2xkaGP(ZDBPb&`5dt_v!FHke(EJ9_%0BGdMhdI+ED4y-~0%#Se7J{sMED zo+7>oXi?EKmF_|{vSRmWR4WBk5KPjhnBJ1eujZs>D=JE(f`IeGz`#&~TuPEeS9kSJ z-}>ZdMb6(d;f$6TZr77ou}FBwzQ;WRIfi}OUog`bs{#AOQf^$!Mt)l zW5Ni_|J>sogl~4gGr2yl;POw*Sjb$No&8ye}Gd0XD?qao>_d|2SCBVym5_WXg{jXbx^#(>6W zkc4f&ywzZIV#Q`3p=jNlR4BvVaERoQ?Ln^hz!Aoj>DU z+q1J23pFVUQK))n;qAeg=lOio*a&nt3~$Z%<_{Nf8|FqHPU@skKkN69t3wVto}aHv zwbE(;aE3eMQ|70Xl;YaYQz0cgpoTgOai4E;vU&}zQ!k*325UY4B;zFx9wrWRdcR=b z)QO(~fpZxG1|u@56sat!LI@XmIQ0Ze1Cg7v0oK$YSL=vV>*xxS>NVO+|EQHcus znY)k4)8jiDP7Os3Y}IIK`dAp`?YY^1>n6HXlz#Z+9d!*sl;hg@S;GZaIAabz*Y5=U zmlp%9`+@caCZ?ulsf<>l0^%nt_;oOB#)(ny_oTLToEymdJd47uZw{+hi{2T#JiQYx zs=oOAA^41nE0YV8wE`J`V`zF1(YOm$BkQK4FfuYb3W2{o{!z44kp-7eq=@M`FY)P1 
z;#iZFKjExWZ)@gLSzUyRyA`y^h1t)P?d2BvU8ba&X=N6Uf!MRpN+Dieqzat9C%XNk zJ!IY_;V7!^4zol9Cyfh#ejo?!UbH6S5Ca1l3QL95!X9rbu;34$pqm6L=3@2Ip-z7kgBocn416Kg#gXysyUgt`0Dd z*T-X`$deF=UpA^3^pYfFj>)3k0+aUOJJm1V*auG*49KAF7r?2yL8*ds_)GLk+1G4B=v}S zt+@R#Fmb?74!AO^SK;WyjIoenC+2NMDHGYCOi-sXVY^&mkStyECX?f>zr(sXBCP)T1)aeR4vnMY&OB#)Z#-BxxwI;UOnBO%1WBY<*h61) zse)_kFANm&R#6HMGqc!2mEpIX8o5vSkwWO$>uF%`^YHq%jgSdAw<4Gh7FsFD9^|=t z6Ua2)*MBa4^%}UfGk^SriHRYtEI)FsF6!);nDKC-8#Ovfu+X|QSov_$BPZT1tj^Qu z5cI9g+C(v|ebG#+%iGP^Su}-ViDIm8B|thOARrCYL4=Dv1bL?v{|!No8To}mZMYMh zGjOOq!7AjkLE1wp1tP1ccp%Wf$~gQ!iT5$-^L8Gd0v+{sHg!g4Cx34)G*#TCRoKg5 z7m~xn(C`3EnMh;N2;*-OtGdY=q`t!yKpSiQh`KBq%;fZb0z<4taYjbQB~B5LCNxs8 zZ=7~W4r7b!zBP;iH8nW1l>Jg><$W{#u>#>KtSFtp^yYHuhgs!fc@qo4_9mCtA(T4w z(thGBQd^X)F#nmm<;e&REOrNAz8*j5bDM?{eH1lE>z_wrCEZWP8W?arRkrgyqdyZ} zT59!3T`4H3UcTNX`55FxC6^EY9=r9s;|A-@0dcLg9;YY;sDvwpL-zpHq2f-@9Bd+r z&!wd{y7U$#*v(q$BdZq!7P{hwI>)(EU<_I@a_pH3QLuVJi-j{-(e9lKVE-`RzveWQ z;)Hye`09)HVQfnR@!-C^w89u*PJGqjo~EI!T%kE>bB=Y;ba2jBxx?^&2)vOma^!yo z&IYDCJ3A?60?Nq~q8>E+@f9#l8Gp{=GOdsFWHH5|jM@4BtMkBZaEg zs~>mcwU6Ety@2F3zV*z0umUybh3oWgbLb76^ z_N}PU_HJpXA5ptM$t9$yS%EnqWXns%YR=A#1Fz&BZ-Ay`w0#Yf-_9V!)grgr9jtU; z@O@nC+LMqN7MWzJ1OG%^No!fzs6|FB3&oa@aiU~NG^!#a>a_-YN>Ew7)liwge~#Oc znW@=~kWU2o;AdwPzX>LX6dH4~)ve}|M#rOjT-ncO>9jwEW`emDi<{%*_+;$mnTTe( zwhER%&gs>%--6F$F26srQ06O4ibt!Vd(H|F67k?LEflKybccb1hk<`5@r#X(eP?@{ z)RMQ+^@}y7&#l-7Am^y$TZO5jX{b%o;^AqK2#JhNvz&b|`XUCKv1p(Ez$;(rc`;E| zR?F9HAu0JOCv#Lm36;--1wdVx3{jh98Plg_)!x3uLWs)B$_`fZUq`z|d?uEnW1C;0#%_-z$;h5ML0bhS`yCnkWce2VdQU;p5 ztX%4P5~%+HCdurbslGNGEw8lZbbntco1(xM$EaT?VQ9E!@?+X>by<_WFjzKaOvb=E z8ovK|p{r;{YE3Imr2%rS+k#oykCL0aOw3#Act28A_?ueFCd42{I952}Fl}-FVlqYa zti>T+>Ybc8vx=`IoqYm(2R@f+AvftiNDsBFD5WNQ1t*gkHdceNF-I0h`Nr`F zHX9E=i^E+Q00hY_emOdk(>j?gv+H!7tAcU0W(5mtXRyhh2F~4Mz%;@Oa6_#P&I1SLis&trvb{z z{sE;m1glK2DCM2g%uMp5(XN?=n5eeL#E+A;U^tjZW&`U97eJ@!4XB#_^cr~i07kK- zw)W6MZD`vUmY0tJ``t z%xZBsN;PkT7kSBA8iwxJ^(j{-2zguGM@cRc@tu!`e+F!s2QyIM85DFp{QJ~LYPeNJI-SC|hSENlv~S-@mFsc_=L(Fzj}yrg94jpa z$Pg||X%a(G zJKa1vD<~*XH)D}bUV!;z`j?mqK+Rjp=gx;e`966EG$8WYl0#<7_IApI2_p}l$9(Q2 zcDAJhN-vp`X>CZtCKf(Fx@PHyN3;91t|Fr=R zH+S#uN@yjev?TFS7EhI76Ovi)!9}(JBnols^Hayb7IeB-bMH*_# ziF857rKlzr;okVago(f!NCsZWiTmMn1B_zdu_q6_%tjm|2c*LXtV%gN?s$Jc6}_<~ z%yO}riHQUrx5SU$&F|P*+YfHC;k9I@u?nDnHv5w3YGMQrO;}%r<>&WZ2cx}a(i9GO zeyIh%<8b7G>IrjmOZp4rQo9+ZX?TM(4acWqoypp-Bd3_mk%WPwIl|gdP@Mw+qoE9i z${1f4zR;hhf>3eYK7HC@nBUNmPx*!8vuocYCU{X}@Ynviw`iN`t9hyI>%7j|;OXxS1cJgpYa1Y$L1uBsldt3RxDtgTTe0e~#wK7cF{a4$mTK!TNr zHariHzz6WAxDzI)NL5+N`@6IGDwIqf3WqUMZVxnZN-%Q+?Z|3#&`sa}dCzb#_ z=U`m@o1K%sJ=667D7qB-Xtf3y5>F!%0p$068u36+hKZ>L_+a>)R^(1N4kDc%PqXvQ ziPHm#C`7!`+0gnG14Bdi0-g5O%V<4MOQL`VrLdin79^ar(&~ua@=FpUxHy3{iAu!r zmi+gMsJBuOqcX-+uAuqrbzzCgtQPG*8Sh4DDCzJz$clJ?NwlxPBZ$j-*oq9zFE20u zJnXXV{K#lAm2G21^Kg-lm8TB<$~C=5kro?0axMlWOUeHl*^=f(XR(^fz$+Fw+& zdR=KWT(ABvH8MKdnSiZW?X%*^`i$$Bl=Enl2AB$ZdW|-3YQU01jNW`=_;n`e^cTjD zysqwhVj}wWBw?!pwR%vI2?GEH$}oL4acVk+gQP#fM+}297toWm6GilaMyj`0Bkzy3{fS(h77j96P(a=H|OQ5gi44;Ox!F z$hegm3mHc+y|~JHx3RGy->M&UNR~W8kBKAU<#m6^;~YgXD=I3gS#7x5CiezYP3?f! 
zH4KT!3xs9O`dqXs%z7r=UFGaigofy&Z_Ou=3BSch!cnH%%+V$y$xuNww zfu21)yor1#4M=(@-4J+b#7i_42q7W?0Z0l86OK7iB2MasV>~_t@vn?Jq$MCOHL|kv z6Y^4Te9Hj?t8{`Wa%$#Rd9GDs_u;dv-Z%HxM|{@ThapgYFk@us=;$ChR*X7NztM5*SFl z7mH~VE)>8{`Mq1Mmtahp)MuDU;^YLSORn>%eD*Np0Y)nk-!Cd9%&$UX% zQR~PQA+@z9AeEO?HjY@p>+gJZ(b^hK9LbIM?eyyEBZy}*G&BsA765j|KsUXKLw0jh zAXcq+=KbCOZYM3Q@Y<)-`Q{je*ll>8{(2QJ)sYtZedc@L-d-&_rS_a~5#PvQ?DF;2 z=T1Q7o1f-uKHns3y7k-4fgXxTGWpGa=`HmHn{hD zIL$R6lV{#Yh8mUqI6Eqj`s8B3MJ$sDt$@biYCEw27{;$g0oW+BDT7oQn2`J;>D+zT zv`W0trG);Gtams44wzq;i-muqwEXbF6Hp*4kIv&c_V@7|u4vC=+R6P`d)>bRo=b-@ zB4?ZV)j64%REm`p){ccrAB95320GBBo@e5Z%?)DrB&yWSRmybE8J^WOHFa$rbZo~9 zsr#*@e0czy#oa=lf3AS@P`$YVb%l7KDvv{gf5JYnPF!xX9ltL6)wVSpO90Y_fi){J zzmGKeAT0jQ?>ZA;PE5k-RGgG_`oR$FI3SH?GK*91^6!a=h{!4X8-({$YhA&sD-Vy4 z84j=XpKgrOHkjs8^SSRxjaJE3Rc$l zZ|FiT8pJHq*wxh5LtDNp-M;>!K7{y+D=6%^$#T+eiZ9*v^Y4JBlE6KRTFrTvc8$#+bQfZN9O98V^b&e$yEq}K!?Gxs_(6f&zbVS|iEzD$!t0PfyetB;qh$j?d@$3 zm%ZI;DKhib;oQNm-e7)W`>$UEtMD@J5#C^9yPj=IfFL$)E_Vm*dKWr;rzB~=Q%10h z&)4V;&&9Ratq*naxgVSMy?%N;5fT?o`gZC=C>7UQa-+{ ztP+XnH{qbJUJyS!TxtNU9FXz7lg0*7Ps+&Qa=g+eJ{AGV=CD1ISP4A3&d1*pW#?pQ zqtMQRdqF`?4rtu(t*m}dtUZAYFW`{=o>>j5to;0Wl8XAgg|P@Qi)4cV7YbaVc2$G4mgm!%kTdA})Hw?)CDUtb zv@YO%gF+(sa9$$Epj8#n;jz*8H4qYHBA$5fp_r&+CA?XaiZ^_^SqBe#?d(bb*iZ|7 zR{FcvZ3}#UaMD=sB|&E_YxQT2LF>Kq;UX)4jov~{8nb~HHmTPPyRSeO3^INv3f%4a z6+@(qw6w=lc28>n0em&&pvgkx$5@|>yV#hRArPkXp)Uj^JK+C5=xA$G$l%XlBqH~@ zy=7%*XZQa^{N_t5BLzjU3-$i~r-@E!%U%KRYfyE?;B$@v#cG|xY# zi@lI{2CX-Vv+gJV7zS`AW8dZLhAr&6A@e0fq%j_<@UtnRegG-=|D8N?-dh#XjWU_p%hez|_+f#1ttPqOBd z(S_$hloL#%@BOuz$w(Oqo|g#)U>v<>_o-*K6AK#{d9?>+9u5mWK!2*l^V%AK|LI@e=p(-q*c;)#a9%SnK|;H(z=0!k3alR^8c z;&uAhcjr<`yF64SU)xYXYSqXFqW?3n)b8x;1QDrRR^(WclI3Hjd)q*Lsg5Ego06*GDa$eJY$$2&Ms|GIhM1|#E9$P2wr*zerkTuY zhx#o$dzSO9HN0wHL%<9F?t8w-ALKC?d!w>IinTQmbw<+=urU(vdIq351yKJPc6s?7 zsgmOo5K!3LuXoCTjYL+~>tS04#OZ)5*_iIUt4w?Q{Raq<;rBg<3d%QJZJ!V*i3H5W z#P>@1(_dIHAVKO;mDR%TaR3d9ii!%@dSl|_N%-7sp8Obb{QaM$;z=6%<)PRXFE5wa z-PCqLU_p3jSiRkKl}-e|_bw7agV!8A95n+&CD>!@6ZN?4!|()`a8D?tT;wd#VpYwH zi_HgVO#iw31WH0BB3BtYjk96Bf8i)_+q3H;`^o7dz|;9T)jktxG^ zJs__9di?FDi#L`n4k>S%q zzanCy#zx-zsi$&-VDH-yhgWlzS~R{aAZXTRt@AE895pyNSg6wg?86qS4Y{xjQ?Y9 z4g_N7H#cu|dsCvFg1&pN3s?#DF z>%)KyhtGh<3if?o|EEqD>tYjQV?dQ5CLs~>y8QhUI9%2h257@0AmFomU8X9@}EnTs05Np3eK!{w)%RR zsa#*(I)1>>obK`#YqkljPt?+=Hr3zW~eD&PCydYT66|6E)f7j9t9l!0I$}^EOUa$p`S#>5ui%UyPdfgA8K3`et zE(?@Pd)rU(l6`!EjCi?OXEID01~M!btF0G;CWL_jimj26iHQj;Z1{uuw6ke%+R4#! 
zN!{555N3gLPD(Kg6gpUAN@O&goD__VTL%k$`r5ca4oWGG#A#v!W|hgG5IFnA&!$We zD;x#u=t=E(Ja$RF zaQ{Uh%V*{LI6VeZdr7L#H~PanTu+Yw{NZ(&Cf!3J>h^hzGEZlvCqBd=UcSF-^JT%o z!AZb6B{fu2OBRoU3I&O~c3|#NJer)?5B%U*-V_i1{COWQ^i*bN#S22oxonro71ZB@ z$PWSzn^q8D_brDno`|nGUyVm7JS3ZETxjzdgm@H5#t+5bt>$(;^jnTNU2@s}{N)&= z=MTXdFSStL=<|Co&40ogATy?OyqL@ZFUMvKqwJeLkQfE(R|#S`i2`-FO=A<-j!UMq z{$kpcAJjW@l~V`ve!6Xi|B4KyrN3KADr#tGs<6IgXV@EV>dsI}1d4&B$jA?WufvYl zG_HBW21iHte+5DY8yJ9<3mW<$YUk%}U%97SNMuuxjP%oQsD>5^8M(z~sor!Hqj9~( z*LUW}4}Ktw>aok|Z8@@tir1*t1&vs+CJTDZ3cB=ONLW~yZ-aU0r30F>yt}(Q*4Xoo z+;#TI<}|2n!7gRUprt!sb$7Yc0B+>q@UYlCp@5gmO^rdQywF&v;>5SGJ+Pg$%27tb z!+Uyqo{Ec$|KhA8cnz;I(^uFQ99&!b7GY4_SIoe`Kub%jt}K?K;Nm^C)25<}3y0tw zq6bGwkkk2c;&%t~tm}V8wvn%HNO$s}*>s5O6LWK$x8v`&69#1=zPpk4KpvKec0)#( zB1bv@p!`vAfBsqRvOM{(SIf&Z=078m)feNDI=-n)i+GN{=JKvx6+QZF%;K|$bCI`@ZdAP<=`Xr;d- zdjB@@7trM0F_0MLz5pTXOKl*2=I>%~H&x-t+X6I=i&+rKNTk8Quq5iPHa1;Oak?qq-jIOz;fg6hh)4#ffvv_Gj}+dpk@jS(^tR#u>2xtO&hKg^wvxBMk*~rEt80Q#v`V^7J#dC9W zClUwo zddhwd+fYBPMC_?I2%?KdY)~ZTaBkDx&G@9Qno5I_dtfGGx>E|ax zd>p7oz2{Dr#bq8AE;dZL(nfK0&1`%#nct+Pxoe*@%=@sInu?8%`ngK~c2WCoU~KG9 zSxd{pSn4g2KZv*mUFY@w>fgohMHEso%F6E*2TEQfZJDMV8!znlENu?pN1Q1mBSWd( z*7h&aT?O}r=+2xNu$4DiI5bP3B7H{_EiInc4GGTi)|_{wf@F4gQLdCGmdzGk9&2Yi zCznhp)2|K|&Uer&j}{Y22T1#gcwL5QwCW0oPknK@d^j+1gk{5*TkN*pW;0SfJm&H7 zHgC?8%@U7w0OcCBJRQ7haL}gR`q=p7Tw{w?lH0onAd>(NF+dK{`dJQVKgzl6W=cXq z!S?oau1*5wu(n*(gQ;B z15f6=(f}EnZ<|adEmHdO>au`hM%7=L9jG-yxDogjC}`L(P~{%s9Ng1;FkiWXGW6AkS7R@%XqTcDBXSuxG0t8La%j1%i zSWz(0ysf0&k%r5G1|}h6=;Q>=i;KVM*?y=82{TV*wcrLxJ{Uqt=K;CtYTI8lhR8LT zu=Is5@0rsGQHfV};tV5GzNFAYhctASN*Zcv)-+1NtGpOMjG=pZ!Mu9)B#Oy7koQR? zRmbR1U`2V#u=1TWhnkdXmGv?=pF6L|`>A=RENP^3{*Rlqn{VK|kr3ZZ^fJk}Rs|3q z&UY=mumn%v1qDGW$jk5Z8_E3tP>_i^`9?J zbkMSveR8IT(P+617GL5aKp3+N(4O16e;2j3v;^>E5)>Jhx#z+$JNt|z4mD1SUAc3z4#oUgs|BEaoq{z1*eP-C~$nV2&)jsuIOW13%6bKKun;q37zLi#baL;P(dvg{iWOmm_DJr<{^KOSzj)# zp1cSnj~^$~OisTq7#3Y{*uz@$b0^CCDyq-Flaaab5uap<<5EdjO(}qHnL&%sB z%w(xV6(t7Co&$8{KAp!SXg4&Me*WC7Cb#?ZUA{@h<9YPX12oVG?#$ihWU2~FWp-aY zJ^lsd-15?L+L_}L0hf(-m-*xpyG7FiQR&Ed_WAQtFSjSdaY3zF^ zr=@AAX_&rFMLCvER^`N;Nu&T$#XA~U9||A}yuQEs?QF3T9>@B7POiZblc#riN*(b0x>}v5&^7S$6A}_WJzOZQ^SL+MudlTzkt~vi z0<6OVv|d+#UgDLPxojn4i4A%T1&-Sk%hnd`*A7=3_L^(oj^Z8@aabl~X6oweuYlgb zeywvf9U)!7dOik^Jskp8^W$U|-y0?V-K~queKpgm$w}MWFCPGk7L)TU5|x;LvCVoR zhtD10AMLJ3ydWccIG$uGmpvR?evl5Uva7zK;pS*P9wB6SY02|X3LP>&M?`GwYJ>Y* zJE^dv!$W;#b&(IFU0AH;^;V_%oC?_JFFYzV4O4|=$^NjpFr?JPwCGd8%*(^S)FChe z-utc_C#h*!85!sIt}Anu`b1)7h1QwVeqsZcBrIQthll(63PCOrh<%$WReClVO`4rm z$JL&%+81;-_wnJ!*R2=gwQBFM+Eo5D~ip)^bx@Ti@%Iwih5;2JZM*u=(2G-`_#6Y0&8s zIhnq<(l%o;5fK#yVvPT+RTJbc*O^G+#^du~Jb`&alD7l~8X81(!w(SjtI?!Usi{m1 zpbEVKRfJ=01u??07M9bO~}E; zMchw%sK&5ubK^L!`5E98R#sLZ7QEm2=;+p>QS|aF>PLxV>`^d`yh27gzP%^sY;Cak z2D&lguEW7>*Hx>1E6Y}%;CGSwxnhKMQ%_4H?1`6>+{6p ziyzZ#MTF2k8X&tXqV)f*>A?vO za18!8ea@c^8;yZw)-thPl zRul$b+)xFD7S+_C2n;}(ngSwXKI94vDzj-l3)}VaD&O|lu3n`Q3k%CF&`l>|dU{4$ z1eRA+IOV}Lk-be|qD&F{FEDs0YUWM+WO$vudg}ndgP>X zJUl$jR|jIFK-{TnS^&VEsZc@@2}Om%fM35zwbBHI4sMxb6KO3!0+M2GkL%hy3;sm~ zMI9c0{_MGPY(RgR$iYzE0J(4Yr-wEmhMqRn?e0s3LDuqP0w8FR;;#2998T!a0Y z$gl@l&&O*W3jlXw(reD-p87)cI$oqodj)Cm0pi`T+U2Y4RBTnA!;R&XpH0>AciDq9 zm``}XvtqGw=OsMUJQ!*d5V)Dbbt39LAq z$TfBQUkeV?-sV_HX(upZw6MHe1n$f)zRjwlCY~#wzdEu*j7%fh!6QG8%~vPHawkkA zERUk0%HZW4QgP`tb1l}RjW9WsJUT7G_C{7Qck0}pMd`2?RH=>yfV(5yU_QA!mG=y` z>0<$}%i|e~FImfAzRT-!2<8&LaBs>GqzU07z%;M}34gZRfNw>$T9 zDLOIn<)ZCZS~tM{zUa8MC@CP++}sQaivmsna{y_-z#0y5ii%)-(Nb+B#KaU!q-*uF zaUWDP#QYx9)!y!=@OTGHm7u|Aa6AD^WzkFonSd)KCcWqVD;u}t6&do9{*@IUF!(q- zJL^FqnEz~V#9wFKVrN*XsWV&FtBnP6V+#c_hfZcWpeJ z#11OBh=>S)aE6A5ceJqw0qsos-T7cp 
z-xC-~VRgEntOW)JN`DEgW-=xG6Ie%!%ZexgaKO`pJ3virJUDbg04nrH*m$Qwk-Kd` z1XWQ<6#ww`vD>muoL#g+tG1?MNE)*+VR}0`-{o?jkCcZea+%=OCg$d+T5Tz|u&A8< zjmV$WX#*eDyK`DYv7MzY<`3_bi2o+Ozk5}(R`*y7WG28lD$JbxC!#6_YXps}=m5$0 zv}wHon40uC#l)bod~*n0M{uAB;k@@QAYhQ_0SnfH&IR)c%a=y#uHu&?o%eFziS`bav>)-so1TVurDe4RG1ghid!r zicU-d;Q4W;6T`!wL6tAc&`;*&_jsj+3)SWQ&hNp4`dSJU5efb;6GT^6FtZ)OO_HF_ z`0^_fEcBH&6IfMm;HCr$QY$?jiO1G0IY)%l=SblPQ0|xo^gKKJ6c$j4ydDSy4*74U zdh*_=p?o}9as09UQC?V-6tUg)#O#CJ*Eo0SSu=8se+iY1vm~=I)wW`w{G=u&6Ndtb z51Gho)du_BuTW#iV00BY6qKUVZiBt-8`=__{$e|If?M*SEHjG~x&vPJZASV}fcAoV zvM#2+|9Cr<18^X=k$fb_e5KxeSZqtxpv9&GCulKy$(^)PCpf4m_`76B>DUm zGy(!5v{MW`lM4H8lWV<#j#obOSV_qn|I- zgTbZ0>7wPr99FYXd7{Y_j0(Z%xv0{cBW~=4dZR2n^xQ@XMiB- zaGHQye~s+Q>DQ;Ow;YDXacDzX-~F*PoTf!nuVv$Hs+pE0AO%v%W{Awmn~7Pjos}ok zNW~J$++38XW3?VBgEI+?%1sF{nvZBy1GpQRZ5a9%Cv|F%U(TCR$_`mP))Az7@#a_&`9xt@56^0g3wYEZ(SD^eW@#lZnrZD97lyn( zEA&MN_vwuqnzw=@Lswa-t#X^A_teLKyaGtwWq6>bSN~3he(F|}C0G*u1T|&Y`2G2+ zLtzM-S)Dl>7!t+)S;G-c1%N~dJ!c0=BUNExyHlCKh>dVMks6_7 zMa4`i^9OsqvlIPjYfO74xuDuieN;=2`wk@&)1-6UaIQdyv_o&7L~iayWDh+l4NYZ5 z;a4bnA|dPI$O^oF{s55Lh+^X6{{Hg^SaTp!Ge!Zm*2^eHK0p<(z^W9XJfpR>6=3f3 zw7X!_$^pVlOFiy2U3zAr9RPbg=l}787xfjou{?eh?!d4`HT;7UeaKYueaiHUmQ2+GSV04kFU z*s?);zEEkj-mXaG0lKG*w6r&%asxFWaM1Z$YEf$ldUYT>xZ@V^;D}oFjgT++F7MJM=C|w-+Faq%~7~}9pS-?g9!3w$s^Gt(A57yDJ=;(2O zt(xuKT?_(IkcmA`?Sb`m;rAI(X!{_?v^6AlgNScKc7Vo#vdx!~Tg0d)3ZX!@g z{BtEGBFzV1BV)9%b`jW!?^VC;=ny2oNt(QsP#rG3&$HfpK$K>D}Y$Be`bJT$a~>}-C(1#8RAH5EX=1VC{(DV&RoOF~d! zU@thojk>(%z!V)cVg&g3eK7Z1fW8Tej1M+8F#t%hMUO1r+}bMP5~~5#hZsy&z6mv; zI%SQ3;M5Q(Q-Tbi^X-(!CM1p`SyY)H~;J|pgyu4glS?+s#t0Ds!H!?CJ zhJJu0O3TMA!ZVpC^gOZD0I;U%g@s{*5lBH`cp+%F!VMiN+c7F)7WC~kv&cxV0DyaC zzD(pd1dNA)f$QDH3NTyH3glX^Y;6eeB>SM}b`ihLn4Odf9pfJ?-wHt;@7Tm*VV*yh zp~cRjqLN zYQ3Ad+LOqw;f)0IleMl7&Gyev+1V%0{(~IjQ#l>(FAoN^&La~5P(q8W5?w+I2521k zO@g}~mosuC<-!j4V!9f$f2sa zBA88J;C$#AC4?S9cHKC>8Gzwxc_NKs@d;lYjwh*sdp}=%GY?It;Db_W2<9Ees2my0 zAWuzwaW~~o-pFrlU@t?y-W6;}H!!D3|0(5_bU`z@5jOVbEl=dr6Xz#A7Vn^-xZ@|R zyHnA>DzXUt&{KVwXZo6|sc;lwsNpfBepFN?FX&u-qxiBA04k7?kumH|OtuaW2?5wh zj4omfu*Gm7@3gx74G@V$T&K7Ey`Xp?OO#fwHSB`l8a_BUNG3f5N57q|?IS>i%6?m7 zzz=|1b^4)S9|oCRKoT&xj(#>bS^rGA*;fPbpfUIcvq3i@Iy$S@Y#Fse=2wVBmZz&U zqjs15zMf|S^o@;MaTyuaf%RH@BQ>>|1hgl|RxH_6$3wZc zaa_@2D0@-8F-#b{u~i0RXY3 zRZK{ENA}EBFw*j1`72RGn0%0UJ_Kp({%jJ`Nalic>wJwr3UoraUwxXv=IJ`t+I}@? zS~ky?)iTkQwy3hr&F1XO(zVPGo!wm~Vv>XrP^C+GL_gl-xy6T;MT3^&>*M3&>x;43 zc~eZWle&K7i-rB@__$Rb_Mydx5tRMj;0F#gq`jQqNT~PgO{ljr-_-N*4$A4k3^?H` zc0x>%3NY%-%os5&n0TX4f$@z`SqxV@X(TB|bA6UzeF@%C*S{_RRLY=TOb1GMt%W6v zQ-{C@L@O&Rt8(}*<>*pBXbubhhWemdiC9k;J!c=kgw2^Q#v-XdBJ*i3hQTQy8~vs} zHDjSC^Wa zI_6A`^?uXx~EJZ85&}%zQ>M zqi*s%X9NUOP{*H7Yh;NAoSs6+@HGy#*Pz)Z;$^g8`Gk97Hl*bDtkP-ueG2m!?lVbM z9}wspe6@Ugt}wXMhS>kR9FD_V1jI<(1U&xEiIaSJiEpD!We(_=F=zG`r?g6=*(pM= z==EDuCoe)(kT@^=91P*7xHfkA`l1# z1qC9HXpvmDq=jD3-Qjzhp^NKt)F{11zQZuf>YZc(uTDFxI~4MEB52-)ROaxY=0eD? 
z;AL_m&&oe!ywOR1&B#KAE=Rq_@4u_$0^4oC7y;y7JRW~Rfh(bmSEn-sNf1*Hyr%Fw zP-}M@c=67&)38A3GT1bXt(|r*eiF9}7`b#hUg^h;e-$9pI=?e6w%SNtQv)B2N{JQ9 zF0nCiV(x}Possny0A(MYd24huCiJ5?R9ZIuCe&4(i@;m0$;k|Yn1jRCnu#-S3u-$t zYIAH~=IJtyEW6`eTxM1F8)%oT&$uzJ{6%bD^QSOHt_;lNccKbiYWvFVI0J6!tIHD$ zVBTFB*9aRZhF8b7JO#3R&pW-C<2OhoWp#DP5O@|DAWIyk7+qbuC6FuYkV3(3aYg0j zq*}SN)swAJELg@?-MKS8EAMpl+lF#3nvCC(zW~^salk_RV43x_cM`enx@Kl(F6r}& zC$SZy`5O>8 z)&E$2SNT|*tqIDMN_i@T2^cP2iJqB3HC#>Pg^*npQm?}^{92ZAa#u^FW)Ro+l8nBJ zu`g7tLOQG9SLa`^S=lw_mHw;^hveCKw3%-9E_8Or+THVLcH*ekeN_&NupA7Ohl(zr zqdJEwoel=a80^eUP7nUFuC!wZRVV82h*bsz?HQvK692e40W6;jFYnE*tgvRk`bpy1 ziV1iGOA5WCwrhC!GWfO@RlOV>#vF5UBE#1#EiKcMQc#(Y_hqyAx=j)u6!lI38vV&_ z{Ax5Vi7$JA&377dzuwcN*eP{-yql{l$qF;043E+}vM;H;pZ0fFR#sg2Rk4JgpWXpp zSNC({2^^dzy)Aj{0@4~{;I}QfG-2>y$zT@nh_Y+;Srl?AtxIsVn?q2>;|Y)NZgK^U z&$wH7;RBIV-ap)bzOcC1-QS&~AEYlvPB^bPQ{%7(>HVw7FSDuX>Cp8)sjW?Xx%?MI zZvG>f;mlP`GjnrSMk;UB)O^aRbb`xb$sthOv&Fs#?jA$`=#mo8NT>0LwPhsF;sgGu zwR{Lb*{~WKk_o?)lG2`)KFnKvl`Af}=q(t8ecee_@D>xK)f5kkg5#x@U3D9SR+om7 z&t&%8`nhTi>-i&#t!*zM8t6ao)RdJyc9Ic>K3w_SG32lUhdA<%HW(Ipdsu7|`pp?k z>Y$gHtn7H5mzAbwlSS0lSXE>M1Q8U@HMP-8ig6mnbh?LElwvECM`Iy#w>3kfMcz-) zpZ_!lE-k2EQ@VQMCdHG$tag2-?AC#e{1%##{G9b|O>}W~q|mL?%{cwl=yldj_DJ=B z#Zee><=Kb)XIy{7im5=IDy)1V?zmZ_v`zs-cq{+!;ewpnZuXbf0o60pbS3yIjfub6;?c2;(_xTKyJp>}IE{+Dh zX^GswUuTt>PoXT1w*2My4lf~b2h=UC$iKNg3<|gGAKP1`ww^%KZwm;R4VyazLR^nJ zE&#iOoZ(dHAPLBY=l|*ObY^lA4Ck)6C(lPm+patkFHm;By8?S~z|t{;Q<)B=6$D~G zO1BLfx^c6;l$T{s*NI0NzKzSx&e*xD{vo=vNXF<1D?#f+v65=JiXN|%VD9#Y3nAg% zo{Pb`oHX?=a$GWo`TEGHcG-v%mM z|ERhM1_e@+-eF=eiJaCR_|me@agUYPOBJrl)dK_ii);cxO3(BkW}jl}ZF_mNb6nZS2h0YKrF#--6QrMQnD z1COSAx%D!r>lDHOB`;-yp#G$mM7XB}0 zZ7zE|zMCjK&77FMKfu3{bt)8(XQH@@%`;Gh-3)~MlP(@06^tz`0RmcLDCDltW%t5j zLV{A{>jj?2_hn}-l$Zi9hCdA==M1zXKgVf}*`O}uoAzc)SrNZ4H9|}y4VqmkBH(^i z1n)FFW@b5ZA}Zna%%kn(BnF^08W%NB1dK*+HJkZyrG7T|xeW$UTB#+TOKAEnPy9oO=H@kKn#gh`^r0qx*i;IT$u69bHDxI;- zS1PHkwCU1}%sl8RKj)A+?t3?B-Q2AI04vIW7zxp`KL2#Bh%1?sZSl$2xjv~aAi>|R zNUEK2Qxe)oO1S+t*!^#*naHfu-E5pg+`MRS0OakBs(2m5dXe0>bR=Q_`|j)>1tt?XlOi3muZXCA2wJ zXIq!>17Hh7is^n0XfprQ#yuK)r$Wau)RiS>a7Vif<>x`q-dHRo^~~}pMapxc*C&^GUi z=$-#q8bMTC*1o=VOf?pu5;Dgd)S+pp>SMZg6GR<5?$gY;+%ocxcjtPmf2hAmPix!?OM!O`uIu_f+w?lPsHY%5Uv3{e*fz`*vC)IN)AJ+W8f9US+zQyyj$0U;eiglO`KBk722330J GWB&~oY(3`y literal 0 HcmV?d00001 diff --git a/tutorials/nlp/images/spellmapper_inference_pipeline.png b/tutorials/nlp/images/spellmapper_inference_pipeline.png new file mode 100644 index 0000000000000000000000000000000000000000..07d85d2e2295c4b4005ab33d26fc10cd56936a39 GIT binary patch literal 146148 zcmd?Qg;!hMw>BDDqJE$;3PrBGZ#pg?i=;_mLnifeFpm*Vd3#hu_TH+_HSJ?A@j zeD^Q7Ym6i#J1c9iJ=Zhmn)0krMR`e7Btj$r0Dvm>Ra_YWc((}vyvcg^270BYeLfue z0b{Q$DF!GXBR+tBfi)4869oXOqLH8U;Go|Te|%N92LRBz|Gr@QZ3+zmfY&@JaZwc) z?c-HMftdqRgtJ9VX$-S`VAa7l>1<`WT2swjF&a8=NpVVZ?nN|*QggNDpL11obU1@k zOO;hXC6%1W{K$NJDvZ1cmDlxi6!wuuNIN8b{^lv;eiV|?31J_<^tisiH2FL2|J6uJ z@E-8~*A)Q32Mv}L=l^wWAND;E=6_%Fkq7+$x|@#Y%M$_4{|H6Ixv%Wu?f`4$@PMby z-J5c)fLpcj;p}`%lIP-&%WjUIIWmK;z$Kp~)2T^a5QpH*Q}Cqt6y1 z`DDl!>*(BzzBS5y1e*ceU`^#_JF_@(08fA4vfw`E|3xeS@JFU+pz+$KvwKr5ua~ky z2Y2f=SQzzpVo^mdtYalHNZb17iim5IFJBFZBw9iC9$-J{WB#Eo6b`@9_2~d2$Re_T zVGRKM#tXQt-o8jEt)*9g#Ss(*3B1iiK#7ezC5H!V`!n~p+^(qQ8R00M&=N>!j7MmVPmjKpjF)Up1J`aer_~ zSN#@Y05$$sVq8oj(Lw<_tjZr5mGe_iTPbzh!6P+Fqmkffo0WFSIR*R@4wGL&V+^Cd zY;Ok$0nKnBMSS?sJDmALZ!q!ZUr0|}!UblA$~>hqlqe8$%RQ%>9n!V6KZKRS0A^gC zBJ7`cS4UD`?ZRX#2S2H}YL)~-Wiem^v;Xja7lepH-)<`(vD-u6r~u-*kXucGJn+@{ zY9M=`7vS@bw78^V{Rz)~`)Rm`N^b9ncPS}G0cN6C^#3k_4@XYco%tsSm3~u&8*|3% 
zo`P<$>gWK|oPA{4&_T5-zo9!;FaEi>1b~Cld!QprND<%<0u*2Wv%pya{Y=(jWt@k}DBq&gD z-2ENK)K`07=O2UsL=I+7*LDm@PP2opG3wx zcT{=H*6;m*rF<_(q~A*w839=*E~iLtV@%q{*0m#D@1U4V5Gn8`L4oR&&wAqIfsXCP z+P7B((sfKMa9<&0okuQ;=GHKCO`zHCd4Y0H4y^(9@zZok0~^Tc)I)ICtlAVFpaJu* z_D$FyBMZFv0R=1vbO#CnrqmWEqgAh&JaIo=LPh8P&gzehvPWj3lHLbFU17=Ij3biE zyZQl~{^UXCkO-v91M;-Ej`$Pvj4Wef;~FkIx;OOt7^(!eN*p2Fo30{%1;;a z^s<#*onWOer6z`yf5+w<_}n*lE$KiU+JCrS-zWIxI;-u6O;BFIkBI;ug^2*{TGam* z=qDzf!LFB%g>TL)^GJrMfGok036(qrwQeM{o>a8IqmkrquYs>p@?}Yd@;hAohqBV+ zcYD@EzW|mHp{nywyMk@H{jf69M9rS{>fnf@0$pV8ZK9KlgyZ63bm*V}4mtG;Rh%VV z+9i(hZ8nuzHzV2%V3iS?>bbmGOCWT`0+p{J(+(;WSEQ#MCPqt>AhSm~vF`=6U=4r< z927bKG`Ba^pni^Pn+KWeJG~N^eGe>Z4?zjr9{U@aY8-OtUDS>4Ysj9~yCipT#NkL! zYlz{=8N(Nk3qv(;LIPhkd+16i=2%EZE@Fwyu(;S7m&pZc6tjdDqg>Nhk z?Q$G&4WAl?_XB&+l{C^WgCVpS#+0eoKkqmZOu} z_H{ZMgdNrhYt_$$#b}1K)14|wFdwbumUOCrwFTbWyNZM={WAwfBw-r+))*o$Ivr7GIB-Q30sR2|HC zyd<@uZ6lSW2Aj|mL@5rtZ~?D2>{(3djCihiB|RaWz31RCKrQCKn(kmjD(vN<9`Op| z{i+%S-4Gp67Jcs;D#LXUSLXiuJ|gpH=aYFgOkVc~oCi@^Ew8zjZQmlO?gSKg>d2b! zFj6qD>Rlie^9GXvG?3FElI~x>u~%#Bic4Mk%YldrlN5nu+4|Y}xq6+~8UZfzL;QU_T(B_h+>XBBccM9<{g+MqowzxswVT?2 zW155D3evcNI$Q(?FP@KF_Z1+3i002kqLAlTBgEQ&{GH}3=Ym&^7_M_e3Zr?{IO&Su zo(_!3p|G=}`o%a!8{pO|TO;bY|KpW0WMeqme2$6VD$rV3rO#7<#1mEUvFPlI*XiU9 zLZl9>ByIu`(2;(PyuQjwN#LMLstR36U0%rxn=WiZM*G@A1vV?;Umr^+fZsWpsO?$M z15KsXJ-XN}OpWr6=E&bL+rS0Dm(ohS7J#7v^erzbrMb^7XbFAj=~S-mR0g7!3ZVD1 zo|6)isU)%j$y&8gBl9;d4WdJrf}DuZ`wXirI;M2`PyB)BypG+c7;y`PpXW3!=r@?v zl(_9z!`ptjyrCTqUxt`}Rok#q_)1tnC^$Q227v{jBw1`Nrx zIGjil5eL8ggC#2z;~x{iob3;)pX7r}cdS`mL; zDj`r`#5u+IowyZn9QjKRLs_rqF9Lg)HWJ8GZv~}29^~hKsT=<=Bdu-^?4xxp+ZM^; z(!Bx|tIUMuo6|2@GT7_Ji2){s%@XQi3Zt63hq3SUxm|~1TPH0*rO|jlL8bA3QozA& zMk}6Vu=3JU@?Cmssc-n}Z;rZ^!w6mQHXFAIL?PH7>?miwRN-E?NEz7D>Uzox67;or zJ^ZX2+T=&a8xO-a4mI-p&2)^v`NX|_c#0`o>1Bt%e8IdFlV|h(F)Pz$*EE<%UNRN6 zkYT4fSI+JO^jvu|z>~Hh&(p?~K;G*;)d{V5-XD_DJV5(A!QQBLUo~&Begejouh~W~ z*4`m)iFl1VzqzR|~g?oQKTY_W7Z<{UtR^y+Be+FI*y$N^2 zbNMyD_=o#4YGb*T&MEu%JMCX%Jr-5{82Cv@wFLhH8*hWsmh@h4GR_mGe=z047S3es zXX}qwcL*YqNAqgR#X9TI?Y3h0uFno3%i_;2No_`v{)rn)n3BjEyNOY6;QhBr6GHga zatb=w_*Jy}RNjd89?E?~4P55&KPxga?`|Eq6sI`OWxXQk;8fJA&%D`&>1m7-Jx3}z z-QcO$$#ZDsVYYIkYdTSCo|Esi7Q+1S6;`X(HSRNH4KRrUY@!P`#J8@y5?9Iuqnt8} z47azj9C_n-gj&dbK}ooAYkL17@z4>t*aYm)B&&CVM&$Ab4+o^U+B+|5xVvM-4;xv?I?K1wB)gcX`_DAfD zw@DP0B~8tIz%DLd-yLo^`&_k-z>i&hcdAQam=KAJu!Ootx>kQEx-n{}0-x3WZ(!H{ z&QO+9Rcd4Cdbxun{1d%9JQ`sVQv2vT5$ccYZoL&*>%tSg@f5C_Zd6;hjgm*p5Af;I z0(eAN+dH`xAvX6|4~E$21w6{Wtw6abGNzSzk($4!kp7N+m9|oe z6@X@^6kpPOF!zfATNrXF7!s~A1c|!#l!~pOX_Xxj zkM5q3+t$%cXxx_(sMe;x%u&S5zYXrftF0ukjnw8%)*C(+hlm}mEEqhuf!@ER)b^{q ze%U@S`|2F|o=~z%Uc>c6MPjDjHEX@ZD}M-~0j+$4hJPFCIhIu}Z%ScKB~$6VPAj*J zA@$sXRI>hVVi2Q<=yYGx4OwSpTmv715#k_;JKwz^eMb`~xLL6C0av3zj4 zfcsvIN?N#=(y@6yPSx=|!#IHIaBFR3w}T?cV*LF#Zh8IIIUbcBK}OFg>BblPeI|cJ zg4@%B4919`OLDP4Sd!;-WvP6UyXQe419Pmt;t62$oSU)#y5v5~iQnA;V**Iv|H~ym z`$9uioo05pJ}`_bfR{=9YC@>K{3WRH{)HOJPHl$&gccfE$-;&u&+dcuAYF+5MokljD|1tjA z?Qm@RL!3l>vN7WMbYr^>a$s$evC307j>Rv!Alm*CfpHu_B7r7p`fVyJrW0L7+n)WEw4MvBCU7Ldjx4f}&gaR=g zm#=u7*X7eA*ILkB zC|Zll?aLP(Pd1LRi}MFLl@^+d-2KLMHUYoPdXe@5t;VxW(Re)T=r;wpcBvUwu)4PH|zTBta#HZ0=!2Y}cN|^XCzgbKP z5)kc?}ZTKOky2ja7j9gYS1B-qsA7b*|%`>ysS&RNn%(+g<`MO|m5LL+~Tp zspzJBUsIprS7=DtRI``*#H}(UamdYGDX{S#;NcIkjO4FAOuAx@a(v{RvU0OBAgr3q zAG5}qgbmCw#kw(RoDEpoqvX&MT%B0GBzi?MQ=ei?ph#adCmut=J~qS6 zs#k-S%*i3CZg-{4_I`35ZEO5Ii8>Ig>+nZ|;!iW88id8+k&ac>I=|2R6;fYk4FG^UeTqEo zReKg<`c}x~Xs;Uu9VqyY`!w1jvQL;d!rqY7hwqer4-rx&H#y~ssy-yuPO0yUCi=(- zOU@cmJm%q`rqa*NHepJkUjB*K@fW4Kw9Cq8FS5kdui<1eGFB0aCKgW=oU3m<#c%Pb zzN^D;QK(l}u@yW2JpXM2?wA+jm{j2%j1ku)JuJSr57?EB#j3-_2u$Ng56mQa_SCaZ 
zu}V~$b$lH2)XpKJk^;@KuD%~@Y_H%q!OTDihs=fr&|1{)cg*NZDl%=Q_T5vFv# zbHG{1KRe^>iV$3BI3Mrd{k|2jcKLPo=J9GL*!hLBr^uzd_`#OKXdL5Kq>mCeJVOe?t6 zPRuEm&FQG>I)X@HCHCHAz5Sp4lldP02isI;5x4u{QMFyiZvzmBG-MB|F7HBK*++%< zVb$}88pb18C#E4YQS%jnC+}kLAet3 z`xeg^=Yh+YQFku#$WO;Hk0@}TB7&I*#7KbD?*>OSj20S;>R1)zCt_(G3ffpB{Ho1W zW*s=pHEmK^tb-2&D$Id<*_e9ZarpL|K!%n9)6+Ii=uKiFq znoj*~yq|%5rggR?9iLRNQ;aLkmrH#6K%fS9U2tjM_gnC-*4RP#(_6#yR(Hbp)DwBS znniBZXCHvSaPV7@ebD|*I^L1Bx`un})u}OGAy@W_wb2$ETte{XRsx_YXT`1W9(E%8 zW~D3^gkCZZm8~I~TUo=(wdbgHO21_+`!F=0ZKdx{UKr|sa#0@Vz*c8*1XQF}HXFj9 zODuSU;5~yleKbY6DXe{<@awWn zo*U6u%!eFrix7DlUIo`GEUT;#A1pPjjvZBwquj63Ii2wf@w+BoI`J=W;aWP}Z!i-y zOkMs!vtg)0V#j^+2mR5mr^d(BD&+!kj1ZI3vnz?r#9RjxtB!wkv?RB|ND7`xF%7Bt+)>kmy89?K-MkZI#hwF-%cV^~MuH}#!kG`a0ekJGy4+WdM&~a|G$E)&7-HJ5>ws#^?8o5}|@h+mZNue(N>S5s|5B7|i&tyfrXok5u=T-YE5F8Vq z_7(R-yJ=Q3Xpvq!9I&J0NZ1Sz;?ycpI;~tPoz*ybr~N8qUSk^=6e;0P!RfNbABitF z?O>qz@Of@je0_({Bk_B_jS3+zpN6_}fHSUAms|sXlG|2iNs0p_x63ED{M4U6H#w8_ zI&g)ah2nUu+HJlv(R0UXp~k1zKAO}2aZz7bQ%3|BavEt&7*9HuJSeA{f_1X_4Y zKY$ibWel^L5^*v0n}yknbK59~{^`mu(9zF35)o-t8Zud;Ae`Wxsz;7?{)JzSz0DMePPz*^ldIxcU=aWx z{C_1Hv-S+I%8ItX)WWlHrhLvU?{ih(Bz4%Frp9=S`xXGTcAkK~*JLW%PcPGasWyGF ztKCY{E`l%T(I*JaoJ%YY7LS~X^Y$q{hRuxX(n$R2v`;rKYf9_ghJ!&uI4N_S0erS- zNJMA|)bB#Qp`m!1ATu~LKc{#!$XIefM zj>I=<6euGBRe#?ukP&;;e)RGI4iM!+uf?hC0il1dWGOK;tcBf z{#EY4$d3Bv*j^q}Zs&YwWfKk-U|MN~G&YZW_h!JY;kRE76N#-ZDv1+=$)fIe#`VEL zBDFv@=FlDF1h@xy>swdW7`n4`a6Tg}n} zhF(#)kx&vTHN5Xhh;~JRbQ83Hn*UQF7SuT}Q9gdHhL$}CA4=u+C}r&DZ6OYzH>mD{%5NA0`DV=?ZjN8(((+os1 z372A;p*A^j9+E8_{w5aMk^VW${HX;DXg2ijvtC5l^~Tc2$YRb6;#V2bmZ zZtQ{k{vyrCo~j9~F2z}6u)X-uN{2Y2_sN7PP)-0#yq?z5#@zT6*RVFGa%rq)?J+TK z)1W~pX`eL8xM47;LlocC;#ozNX^H*I9Ue!I(RPEi_Qu4Is96Qqro{6(o0K!0Xzk~q z*mO6$%V;f{og)vM-xq;f_u5RF4UzNVtp<+pqnLSkqo&jPuj+4I1Y?m3o+1yc(!=Yr zDRK5G{tcQkAe0zDRAL&o{KI86{XsI%r9YxlMQVe#>VK<)XW7ke{S_V0&*P5JV!Loz zjS@Z+%Yqc`Vv2MIHxt@E7aeV&nW+4sK|!+HY1CKUTvmKDL~6nGI>&V8KMFubfswlu z{jai4*NYtKG}Up=1oCM`77dfs`)8e5-5ICwbOv==IUz*1D&c2YAP>(9m<@?kE3qn4 z>ZL#-;OZg*e=I_qmSaKG^mVIZ0EuJ+CtuIyXqN8b&Y)&t%MEsO9JQC9$M*-K z10~aZ>+(Vzwdtbf(;LcQ&mok@%psMxf0Mn2o924C;=w1{HYQnOL}fX`++>MKZ8BLJ z$J)}`CD@0=L@m4A-#$ffY3?jd$ly5g<&|MN`X)xh-G53W*v*xn=W?KC%E($hDyvJN zVp<&jC`LCbN8PsPIT=Z~LlrSq9{T;^YVL#b9`_RrIOw(<(M2LM``kG(7$xsuzDKsr z?I`+}9massS3AwNb9iO)`DR8wd76y6%k8(L%>UQ7iN}F_w}kJ z^9k#fPGOd?xL7jt?BTkOm}^LD-rQYT2`Uy^a$%f;pqE?Rd>_s-18wACp)D=iaPi&*u&IT}?N`vWhtS7t+IY!X7S(J0_;Jnl2?IhXU(`y7QsRrm+QV z+&yDx=mG&bliUHpR#O;u76KN`qrVZDEv1cYMDAfoog&&WVAKJEIHeQfIEBo3EG=_!vBt6TQNYe~sj zTV$RCUamp;Q7&IG*1rWux}V%8_fiBWeaIPj^oox~8VMtDM`hp13C-Pe@m#jzQ5Ste zXa2m=EMV!fOhJAtHv=}^B5fL%ZgK+9S3n72!2;WlU+YCdMarA+~3Sv(d;AFFMOwX@v?KCZ36J-$K^8e-#6zc#12vh6anvQ`OdAF1?t zN1C^Ag%qXak};d_y)1Ou1s=#BeZRs!T>k@;^`+TH7WB~UG46Y|>y5~xRU8RuWZxedJ0BESG{Y%%=8p`s)X=TD638|Zqp2g^?+7hgYnAj^=;r3^&> zhHg>0>sb111c~gSD}MR1^C?gc#fXFNZsn4t$Zu%*XwR*hh6C&~!-C9lhC}M)cZAXs zXBn1bu}!J-{J=Wjrd!i{L_i$oH$O{c(QE`{%x1-jiX*TWAY(gK=o91$WYe(k6kw-U zV{e;WV5N~cpV6`V`K1o=k$>WAmXgBg#!}K*bfWR|rON17^Lq@}ClxlZ&*y9!Q>Gd2 z8e6W+0v~3MLHBYq#ZRM$w-zrFZw4h#^>92h$&eiNCPP4ba*H{$8yY9NC$(3y=nvSl?JEp-e_h=B**_?qUGg(6|ym?lyN>btw{ybFMj@I>EeP0t|TOISSF)lW5Bc+|o)@8I8+ZxZZ{?yR+gkcvD zjj9KFdiG$JxzMWi5a2aC1ZV^-c|$2n_yYNEukYmm6Lp@Htg!mUd~0fU za=%)JQRCMdJV}bpCk7W}q`XZc@Fgx+cllftx$jUc=aa%*m-Ld|b1IqD=}pb)BS4&! 
z?33&&W_p?YB%$_eqN9{C3^rlj_457 z-%2X154CidrLPjZxqIDys4YrKLUKTffog#c#ou^@5n%8i7Mjn8sLN{DNAKscQgL;2 zHs`}_M1_nhba=_^VrD#QZjh-&pq^14d(Io~h4J)K<95Mw)?zFE6$WpTI=${7wCM{e zMoGskj8c5+vg`%XR7;GE=wd)2G zIp)@%NqN&e4v1!19+my1IF^9N)v@l^tD|aev|%|s<_XCS{)-;GWWsXY|`P?Py>dASVoeF8%g-Mj#*YqraPV<16$Ao zI{si=m=*Fd>CBYOtL-D|33)yk?$yhPeHXacAAimhuWx*#h3fDy-wB-}l%iGO{g-h! z_aACc)@MPFE9?ZmxunT%r6RL<2~^GMchPlXQTE&57G(=Vl4fPvvrl4!{T~E8-J%Hj_(mJUTsny8)%!99Sr5)!(_R zEPsSDELv_}h0cj~bUap+aQ4x%I6(;hb1IIN@+tC;a$;C=1x?YmlSM=v(r}T!P$~SM zvS{-Fg*|(pW%+I!;+Y6U3z_6UwA@0_VI^Y>YJHj|qVvGYpNGPgbDXSMxaSD@Eb(8G z%q&ma1rL#e!!s(BGOCT(SkZ?HSwfi%QQa8$BGly3N67PsllK~!rzv+c z%`7|E;T*q)&?=lKhmIHJy9cC!c;uaA{}{X5lW^KO#yO6pbCEVBscA=MRCG2q`lgvQ z0~yxgXA|+PQvtMwu~C+LPO?4FA+#fsR0~BjsHXq1IVJq{?%8hqss9J17-FB zJpQ4Y3KBzE7iF+zMPH#S?m({9!I_ZPCaHd(y8a~ac!lv9q?MsFF}UK6ZY-DD3iS(A zW)mFjEa;?fPWAc_LSoNc>5`N*f7$#Q|8=KmQilBQCTlob>V-2*?D~n`gt4NabFb=# zumr2BEBbtV6@j{0a{;+@faN29PrS$HnGvb;5U6d7E5uS6yX)0+997hGjL<%GJ~k;~ zy}R9sX_4F=Ta~BKF4j(b(Z~~1nM?E(8FfK#mpyD|PyXQ04i{elsd5(lsPBoS`qJ-R zud$H zMk)}Ew1{}PVin_CqLMdZh(33%3Dhz+JP~>y^~_gu=5B@cnM7q7*83&hmhJ{KWnZK` zN;Swn71R6eMk9z+*war_kfgvFQql8x)rSNk74mv|C;7qS0Lo4!qn)(!8bkm%VEnUP z)M?3Hvlxr-k>`;isAzJRfWXLjygh zUU(X3uj|G7RQy4duxtn%wxf=8#fle~{6-jVuskh^ZVbT7sh~q;3UrZl>&)2tg zp1%E6e)n$#$)A%Ajj&dsjBAlQUXDp$%Q9Szl#$O^|7>1a1^!9JoOBgdsl4|6j7Ua9 z6%E#Z%|#oJfnYD$AzYw7DUeWq3g?tUh_^#maG8DykuU#MnG+XSwRzB)IA}BZq6^u} z7mp7xw~_^6Z#$aeEo#X_OMFSiG6$!uAhQ)NqHun3+qCI!L5c2A{ma?NqQ=`BxhB4s z>NY(|)P$)QR*>yVtnN+zXjD&h&F({vL9q$4K#-*U0(c?57)bOsKB%0F2&E3!Qt3LKW7n}f>C!u7x{*2@R(2L+)($K z9ryRH3yV`-6z_VNX%0OpofFFL$+~TntmY?@S#i9P@{ST+^^0_=lrlDp+8EwMGZxwl zzVT9*+|v7vP^Z_4qPuG>g?k*Gf*h2TU5Y!gN4!l2v4k`LAD5hV!aVrt+5D zgccE~h#*u>w!o*8rOuqw53Ct2g2>Ef3-G7QwIM6I0gkqp=8r!v)R`?kbC%2yCu^&U z%;%#Tj4!Vpxrl)+FBdIUE!<^jl7l%vQ42$hTq?EBRylM)DNnl&yzIu1huQ6o9K}+x z%V(53=W>L@A=X(^se|Cp@_K=J`Rk*p?CGV)LvOkb%GaA)Y)@Vt>I$;|Lxr7~hSJ0N zY=GX1bC>R-1e?jqcQxe{+8}(RY(vi=G1hmkNv0KFM`nvoVPz~IA>YolhwGbf;hK}F z>hrteK1o0JsSJ$Xp(4{jDdOaUxtz1pJ(1F8d{+2C!IlkH+VdkcS)(Q4d^!O-@8gbN zOr#pX^FP5cQzKhGnJWQOCwM@dAR^jDN@W^$r?s!ErJ=9&%E?t0yAg~HG@nazY`&zR ze=<2@aXw|^c6+3cBy6|xCn-J@DqkBAbbAs|JzLu6FeoNeU&L{cmhENStE|>{vRRCB ziv*{@_8=t9ajkdS)}W7YulaM=PMw7a*U8s-z1rZQWUAvzo?%OlLk?|ACJh;jp!g6S z=ROID3@OZlLYnIcvl0I9;&7@^P?z*(i%S>>{M~M{zy^`*{w%Rnm1S`?qb2t6S7p;|OG@ulyfT3#pXTcpMjMD=7)Qu%uk_Bvsm<5^Dk zr*7GYPUPiQW+Culighn>4}?l#@vy;nppYhhK2)+{rR#QZR#ZhBjY;+M_9!GiNzV?6UniK*i zYfu~qmDif+J+1}R4>k+8_@;UEJ5^5?Vkifc?GhsA8(6k0Ak*s#DW#2bB zg%u|jgj9iZuU9+=pYs&u`zfFt`Q{Ln3T=r%B^He;w4E?#AS8tF1*x_G$m$(SkZ=-v zze)-2YGrcwfA;=y`$RD5=mAeswmZ<>i2w**gGL1thVOxOb0S-eFvG!$G}+nbfraJ~ z&tsnKmBQO^_0Kg${;{~}JC2sW(Hm&vKM{GY^1SsdICt8w*jNw_`XsFBC|v{5C(9=k zC=53)kJK>C60|?eoj2ME7L#vxeMFn}8tlV&OCPw~8{zu==|KJi`A-J?u@r{tW$r_k z_-P8{MV$b`}nVy@Xw1{ucNO+2i(4>iB;PspurO^&(xY-k8At z^Wdg1Aw?QCAqP&v$2OHc{;ZruGw}&tsnkI06aGYvQ1gE}@vW3iv}NDDQ;}@gzoE9E z1A@KVB(g{DTnii#IHLceYkuLi<(hkkB$1i;u}=RLdX;1pERuJR;RV@G1O~P#v!YNWGP1b7)>iKrhQhVilGEK;p_k@p zvb^v&%Tlvkl&OMBr+qAkS9NKl@5oI+ta+DkVrgPC1{vmhcB^$WLa`8VB88Cksx3y3 z5l&7-1fPZe=c0_-YPt&K!Pb6D6>~nsz)<+Yf$cvKw+TIH7$cG1TTun#%D*y;8yK?Q za9m6+Mkj?ILfI#mKy!fq=c(J@v2cNwX`V*E4xaDs&UB@(=h)DxX+Mab^9zHMrru+H z(t)LC9B5CmV@!>4kM1o0{jv|1nrzS1s~U4E51WwZcF`7=3P*|-%zULN%LyZ6D&gKd zI$$?unEEC6jY!Bt$7l2%2hr&U;>q=7vZ)x2`l}}f(=|NV5SoqvdpVLE{^>T~ruUpm z4Mbc5o^-&)BazI=tWqztEPtYEu4GX?CT5Sk=JI?L9HLMV`MTzj41T$LU>BBi0P=-J zpX=}5X~yflwhZbakWG_GpQ{x`ByH;+Z5N7K?4!qp1=~(i%NMNkO11}Z`IWbgbbq&R ze!J@q{ILB3_xB)9Gw9{n?= zp3Z!UOT3qxnhRN#9{%}(bQ~@LQ_VCUAA3HBb!ucU35V!0q0rDfr@5}EpuiQ{X9bV# zhgYiw%aDq;^00oXy>9ZU!6O)c2XZQfv?6Xb6Wt|q;gr|FK*rV`vG>HO#j($~!sq>L 
zNr`kNwAifIdP7tW2JD9Xh9zIy?ap@|6S0T(VY=%$ zcX`HCNf~MFEQ|LD9`pNzzhE^|Maf;i*venLghx!^O>kou?!@=CElq#>x|BI#0bZ6&!a=kzLwZ6 zt_+H*wy~}mzcrvf+L<}9ku9vzGoeh5K{YB4>*ci2 z<6&m$jgWjjOTzIWBGEG_dKcRd!5 zn-vs~l>EZ9f{f(cN76$4Sl+@LkbKIyuJwbmClB3qsds`zo^Rayv!my}-0VtBn^|#j z&6cc?toIvUYK$Y1=VLTkAGJ0^E~f$v>bY7I89Btk?-OteNEJ|`X&4x>iK=#6u7%}6 z#h0Ibh6QA!rFS*8fc^Phzlfjb?*sZ3@Do7Ttc$!C(kbgBNy17u<34A4^1)?y!ihTK zUZfsD#1cBN>N*8y30=)W?oI~Jx#I0GEl}qS=hjiw&Uz`b>o}tSnBggszMR)EC%GJ4 zR{fJ*vj%B1>$=04UVw&E@Q~3=HD>ZA`!&Tm7v@9F@_o9+B~zM;;T1_$E1BqPPJRB zAG5oSSXiGr&~}@tqd0?48QAU4l}$H6mS>az#fW6=&wWOA0OY0i|Kk9uesR4Wy~sSv z6vg%7Qc4G`Hg&V?045znJL^*oM-^u;EQ8XA9T>9>T_Y;t@d(6+d>cB-IRa6cP~-PVsh zxP!@z7|NSv!N`Epbk8LNqm>b>x%*CpTn04tqlma(?Qftb7Dio#GjJe`o_HT{9qh1P zH;H{zS2EH!yAoUC=0<%!?=u{Bd+$JwH@@KV1~ z1{v}I|7hLS+rNp#*mBNaj*?^+T|ji_N--XOnEUX!4T2_>zIWBKnFy%XGVZ%+DiPv2qS0ukRQ zu*`zguQW}+*2f;8Ur8Cj|FPI?cXi|3(ti*3D37dJT4l_SWZKpn2~m{B(l3scQ9`gn z&gD+kU{WOzN@@TW7$}f>R@nX?6mtjgY7+g9)~5L^Hhh6lCIvPMg*dBrc#jOBgg4!3 zn$?9|oj$M1h-Ta-aH|uHl(zXk?TU`&X(bMo9lvs`ozb~v*qWMiogh>lciE0<+e^rF zw6<=X8G+o_fXEJdwT-0c+lDMZ`J}P(vM`TBL>EBBdt35w`B4^pn z_Y1s~hWpg<3DabnPwKe(s0OivA&K9Nr5%XFOe(H($M~!HBKI7B%^gZV1KD2SUbAmn zBRgJEnMLJDqf)HNMW2d8mR#GvdTT~(QRs0GY2MuXv{^L}XW^eza?RlTb|e&+d@!NT z6h3pu9kngcoXxJX!T`>?ICUE+Z_ba(lCyC7ioY%wY0mFXrQv;JRw4fMMNcxgJW_eZ zR@<34rK){C^af!nf-t&>?&4+1+}3}yS~s#6mXgj^^ap+B_FC9VvRkB*++ywK6S|nY zTlZzcYYM(UH2!nhoh`GfnSN{>nMnp0vf5?EqqMt%O1s!QBzmjzEkildD2|s2dqf|N zOM17!qOrRj`{wkw{T0D#%v0X{p7ETQin`NN>G_3B^5024B&6p(PiCaLL#*7dDO_Kh z2C9?Y8MMpQuU5{Z+BR71RF#%E^NSho81tC)KTEYw6q9vzoj3}&pu%7KZqMjgqgsG!(dq_eL=Ycej;%brhh zw)`A)UVSQ@6_N6VNVV_&*TG_#{i}*HS@raX?<_Hdrug3q;0Zc}9xP|v!UN;tO~s+H zOFT%U(b&6Dwf%9jRzD}7>1IdJ7AMsiqEitGjazbO1`dz~4Fq0~KL$IPpT3t?P&xWiA->MR23GkXnJJToKC@pVA@d#850jWrt0 zbX!A-K#W8}PP0koN3Q1?-J|ARy;(1{JA0void?sGL_Mqj3j47G@8 z9;{v=I1j1O_=V6@iW2(2&usSi#LJY%!xw65zBF-qiFi%Wj`Z>M;HahLqE6anchvHq=WG&M@-*=#vMH+(OmE%AHl{ zhwywX==5bhwjH|blr}67B%7Iqr}MYmYNt~D3Js#xY(~c`E-U{Zvd%KB&2HV+Z7FSW zC{P?)+}*XfJHg%EUE5OJ-Mv_FcPMVf-GaNjh2*5)THo3GI)@+mM*=hRo$@?m+@pt$ z7TGSZvOBkl!mRJQ;)-_+s@)%R_dG8r2K#Lh=bwAXF;Yl=aiY1diNdOl&^G(y^wMSE z)!$FoP^uR^FlF@Wg{D?3`cF5^3P4^oBpD~EVAJy<_sdhp2~qz?4>1I2_mXeYg8RE1 zT>ri-+6t>gzm&?NxI3jt3qMe}}v)zH_rPFdV@=FU+@K zHf4eBkLQxX>0q|&kf{b z0$PJzG34K)z&R{vHOCIl6o$_e! 
z*7#3RvolCpPb+4)jz8y&3(s(vgZF-^*4f3YywCv1IIFmTdP85AQ)%yzAip=H^A(Mu zzljuPf(RW;o%XUrBq+hSSuwO?+#Y*M%YAw*Hs%*xd4XBBn#W=F+Aro zGD5b5${SM2+HItUS?tyEaS9T|R|T_f>c+~wnFs~^oNqUu+o!~qCq59#7EVV(#ZcX+ z$|pja7J4$*m(Mi0HQxm|cP4r+yDRlc=vXU`t)>L%-K#fOx9^znZMCBVM@)0^9 z#8?)`QblWUoz10YD~vN+;mczg9Frq>dV?e_-=#o6zu#;;)rVkEK-HSD(1#Y<9k{%M z_W7TJ!lWGxZ&G+8jDqqL*J#H~v&II-rU(yTcysE{oZuh+i7*vd@n_M|sN?GhsT@FC z#JI3?+;wV&vC7Ug%a2rH(PL)}*-h*P1zV$d_*2l8fw$|0Sb-OcYZ$wIWx~Uw>cSD> z=a%no=ZBa03}LY@hpyAyz31Y@DobAPc*SvM@ClzH@+~^=OIoUpN{;ZYT9yyoei`ia zhA&_P+|GU|c<`^ZelJBoTiY=J?$fJ$Y*_beXm|*@wGTR4RSgKJp2F{Kdr?kF9HAfN zY-RG|qI+}*(ppEANVr9QP?vpif-M-~>y}9J7D)!-{P!n*US@7tGr#QHi$TBD<}}!F zb>-dMrc%;ZceEmq7$J`z>`{2xyZ;B|=jgs>RI(h|zFAa?-IW<( zqOP7eCvckJzevx`4A7XqXy|lIZ=9?|WI-r|ody9GP^UoS$EZ*R97$Z>#=(iDqOwkR zh(yxvFX?h5RLWd$W>mv7kyEM>g>;Fr=a1#MD+*wIG`{ysF-QY3nBCw5 zwy2I?Z6A*#XJICN{7y+b@gzQy7;j47SNwCFyypq+#=MWJU3!w7bfXbw=sbYCqz}%@ zvSanfuH{rLu_HEI>O*H2aH0SuiIv`A6VCcOXLEgCt$YO)+Y*Pbo7M&j%Ea^Flt9W} zL9fPh6m!#6LY~My!wy3c58}HRTY32Z2pxW>yfX&ci(0pCI|S(5^E3_0dn~;P6Gy@- z*TAn_Kf_FmUGrbhOir>B@jyazsV%!1;Ns4YYmNtgd>+YCx3?6BWDPEc zbs@0YpG}>9|A=k9-DPk1ieOkeJpx#JCFY?Jghx#`=h&)L03|Q(6CLgQ=*K`<@#_;q zY_;Z7Rzk+{CT6jYPoFh5fBW^-@g~YkKd-IpN~CGGadxFTF&QgwPoxrtz?FhaJB?|F zhB9t+Ob6+Yz6DHV;Q8*t@ONz)nPK*kv0M-HpS?^Ok`5r|9-D~%s%QW34o^tljVxD6Er{17q!9-oFPd1;_sblC+)kb{ zkH?0l1-W*;E1eihe|vE2Eub{VEON}35f5mxa-)kP42y1&8nG@a!6HeZf4YZLIFdGQ zX7F9D(aL|qjrkCPF-2SsmLc`#>36q&pEby&{Fu?Sa{tc4eBWQdyLoN;aYSaVZByF{ z4^mdc|6h5v0WOSuB#zB@8ktitBjbj|@7P1)$yzggdCzD04#P4b(tjDUGIoHpdX1wQ z*VTXGsLdLG+GJih(V%d$Q+#9xZ}v?M&LhOh*pE6Xv9;oN`r+wHQ{wliN~z50Bc{1+ z`E=(k&U{s}8>3SkIitFjv!WlkcJaIESpnkN{p0jpPdf%7?slAnk2fAE=cooY312jT zHBfx}Cerz)3Vio=+h5{8HKC1oQrB?q>1V8{QH$-uJ0_AlEHVhiNi66jEK4jAa_Zs^ zYuBbJ>zFxNQ}*L$GgcE!XEmW?6&?&~8fq%Po;c&hZax(_(vVDwA$3%FD#j(-UhEj_ zFa~F(jH~H7TCJmUiLUf>IOIFn!k&?PzBR-76=c97l^j_GIYnV-IRUYncrS_iTWH;* zZ_UDEfWQwjJHoMdb!hP1r?3m65L2v%Vmz8;g~5r%m-E14%S97l&dk<^-~O_7Xle4} z#g~}rt^l9Ckc#PFW5jQ@j31$h0HMDd@ zTVt6LEDv}J?sTYLMC(?p7H}VL-<6X|$t_gy82H@vBjy}BmB)CHYR$S%&5+C9IvH!| z*rPj})78S=XNj4OYN--{1yL+(V8AaypFoG*y%++vk%4I`r&;D3v8LPBOiOu+^%;MO zfA+3#!LZKbCakoP#;8LCcAq|~cve;1_>LJTCnk2j(jr{!<)KVbXt`Z~nvbf}e&zA} zs9=H8MhY?rzbeX)`v81`6OW&k>X}W2ygaMwbG7Ev%9QEI4UNoDTO-}WX3W#2>htXv zN=9+jX{|0MN4L_P!zs=O^LE@j5_eR5B`uJ;z&LWz>?x+=q|eqnvt#MeVHLFlo6EkE z?hu_oh8Jz|^79R&qwz9<6Hw-Z0T`~^BR4gXyl~5p%p*cbU6q_Vid-)0kZvwTMSn?Z zJ}4_bSi8A_$-Re|;MPwo5LvGmO%iYc67=|SFImB*Y$MW?Do+~i*E4&{Ru^sTP8A86 za+8vpPNZ56uH+2Bu7fP}C=eD!WMSFyJ?l2E&T@P&Q*6KJQDr}IvRms^suRcL^Z1o7 z=%+^Gug((E@t!YCW*`66$2Q!g}@_JlOB$mMrDPlha*D(tD zy(BSleM=+iD^Dp=YEzq+okeH$jO3GmEt&6Wl#L>vx+1Bm@$*$S&iAba2U`dg2%5g; zE2+t?;S|Ike%roUD)+};QaKq^>gKjBZ6q_?K$-af7{ zJyVGHJ!xv~_nMq>(hcO(MoQvgZQmzErf=)@Y|Z*U$xc=@=E9Bgv@Ne5e%R)L(l%$( zNuDrh495E8Zf=^59p@{R20uX_-&uMI~{2yrlqz1&(nvGSleBcMWt_WEIuPo!tW@lfS-ey-%ZoUY zZ2advO2ev=kmpaC3bG>LFW;2gdZfrf!Hs7gSNnnfF8Fl2C?!0qf_u!ZwX z7x6brosz2Bj|^G2_1%!LMKnkOz4}q25hAcQ#@>~&=7DN)MnW`<60ou%EvIX}@uBe2 z?iQf6B+W}!J(2d#_+&bN{EvBSB;<)xwamS^r~yE~CI@K3jsz6dLJE8ddCk-ht;OKDlX zGCn|Gth3s-U!Ks_{C=0Nyz=Ox)wR8|+jk#-=UV1$9@`sF29uOAgg>mtjYa9)D7S)A zEk3^qb$UJ`xtLVs0ks878*(6s5lunbw6SLDV0Rak{yes*u^OJRH8r;NY$*sxiUZ+; zZt>FB44>^n^+g93k#Z`7I$lnm(V%20no(S~#4yim# zEDcOMF2+{!UYNRIrL;qC4l(cG@;%N$f1NU~4vI#uMfd|Ujy=WR`;?iSN zZYe_z%K4a}Dk2V*tQp-W34bDnJl`Puq6cw)=5!iv@*`y>4t<(;=wG9gq*HE5+Mzs# z{%*OE3<3WNNOQ83nU#-IR$A{2R4NT#lw)WW8%T1U;(O*lzFKm6x8Bs6G-Ju}f(@mB ze&NO65p4gcEMO5QQ(Rv?q$thU(8P=%M0sr89A)4V$zC@f#vdSU+6vxLu_Vg^a%Lo& zF37>jY>P1PyNLUf%HCpkEP485xhDlYsvjcSiMkrpX9w#TH}bJWJ*v_aL$NQXV)R5% z{AKgk!p1b&0pQ77oBV*qTFgrG#iJv-{9(ojTX_n$+=|xpQ*Bps1(xX?fS#To_s;A} 
z__8q6in7y$Ju0w<(3bWmVjq7#=7!s;4nXQY8Y4B$K4YJ2S znlqowGN#p+_c6YA?}WU;x_;9S*vY}O+0%jz57^m5J-`2|alLuK{+ z%4AjqUk2E2zScAYz(t{@+pe+(1cC*+dnA>C{?7i#=ZOY{Zp)wTWR+_Ldzmn8CE_gW9IA}v0R_ncVQ`yrOH&j3ufuA2n%kXhw_&mR)F#qxNJWoP zi1EF6cuI|T2g_WB!?!1Glf%i>o%bSSH6uR4Qy$3lH++W?e`?ujDW1b7L3gy3glk(% zhJ1g_4~2ReO9CF|n+HYWqr=N{nk2fC?Y1+vSVgH0RUsx}p^UV=3{Is_Ql;Ue9e0TG zh`>LBehstk%g7)i17X;_rb3eqg6eLlH$nD;2;DyKy4OseK4kD_D3G6$6jYt{yxBFbX1SDo?Kq!49W=l>rVhgc;Vkt{M)vbH=+4 zmvje*E((ts^B&79;!-1#YLtSUSnoJ*q=C%q_Sifm93g=F;&Um{Dz-G9%hy6}^o0|o z`#pb!>;sL*jSoQs#n^qzv<23$5hLC(W-Xc7lncM3U6e((bAM#qlMVd!3p)$tn&%#(OEauIiFwCaBX}kXV_Zs zy2L|;5_p;J-~Hx`@$!%b>I)4k${@Sw)w!P3ilWs2ux52aS$XN2R0wE-DCr7Yja4|? zcCl{zqIj&b@zpQXMzzY2RCck=A9OVwH@!7qI6-X`g*7aV)T61M91y|MYO)RHM?%Ag zKiX`+&B&w-)xY69Hw6F;Ll_yIpFI@blN_GA5=zcDY6FskW~Q+>ed&7CYG$JC_0#zF z|2#EHJuo8P zS<0_+DMCA4`))3<&FEqn>~K=0XwtH)V^N`Q3j?xTB#Bqr#%f(m0a`cvh3F9mib7M4 z`pP~LrEb7;9&oD}N~0`zG1jY89g=6IKgzx0jQv#;Bo=5s!g`IhVf*Bi3It`Ab4uiNfW^j`La6~I6PMj0)yW|~{^BqFs~heT`brrpnowGuin`(_U8t1f~G_pc)Y5GjaR8`ktp&jME4 z#XEqUU-h9OrI`#EA(;A!`PVvn-Cj;qn>l`!{-0C>CtQtjoCxP}4T)19ZqgUpVhWNSTXmZ`%WIkc$KAO10qRNIZ7IfD8G zS-~iW?aHrLQtLs<^_dnE*+Qci89JEE2wy7N7+yFzSL!m=*u@e)BLIWLPCs?ASqv4% zxk%(7fEtF)O7nti>ksHtc}MT|Hy-3bvLb$V@LQNx0##)wS#nU=;ffzV$(hcZM%u4`D^3F@(Ih+~V-k#nF|&#;%Wy45C)Qo(yP= zxKRsfq(6vL=hWOEqobo>l9uWzUEdt_e+h1?5X!-6j$leaH(}82vCmUXPegwEt zrlaW5=LXsDtzIt?fPpu+e@Zd!=uvzYEodAJfeKK+zfb9_SFZI{f&q}o@Z5q#bE@nQ z(QQvuZEr~AWpw9l^T$PcTdvt9P}?!-YZ$(#O0L@2%qrq@WGy6RUu$~#DEIL$2WZsJ zs8p^Q@f)pIY<0Nj#VwI*_ZnO=b|yI9*xrQk6+$;vw`Q!!C~d|#(o0L(qA4!`(?6;m z_{TA?KiuE;bU>HG2_H+-vtWuiK~_GYiZ|=chxM218B8v-BnC`oMv|2VR)Y)S#;uPK zR~w32-OP5ght2fymLtOV+5cA{6%uf4pd0kBMv5C6S!h+M;NRW;>}gns{ZW@r`wvO8!Sh-H31?APk$vMA(--on_ z;7fHdjkNgQoYUbLbOwYmxcqjsi{go{o0F7Pu^({Ij1~Sv%1Jdfm^x&>4#qn=os{au zufhJLbv}z5y>eHTJWc(o28AeHc0?%lei*3~Yo*7Ck}I6%dzAJXgYUrHmQ(el_CXK6glo9j*j@c!J4LifzE>drSc zex8Vk+ZEsiBViFddR+pUS-?4L#YnP9mo!de-^nNyCUfqi22L>8yZH(W5e9ATLes=B zn>bJ);>_4()Cw_A$KaNpkUkD-L8A>%0k_kB%v-?wS`(2h!r>xyQNnX>Jk(y`MD~m>?2ADdnmDH3 zFbqOJpk8U*;S=%z{R#Arh3gVuA7MCPnwyvzQ!u&WDrqdp3{~KEqBM&8K28c}2O}7_ zZDx8Xxk&fP3JnN(R2>>#j5a)P)PUc|+udM5&TQO@l^t2r%bMW5`E*Y>d<|<6%ePuuobMhT*zBQqJl^(S`k_s$F$5;K% z^Z`%x#l~TgX!*VCQtHuCM2fpjNAUEH9~WN%Fnep^_fR`H#2LEWlY?)!ykNspK;?}N-B9K6NXwXC8q2fEr=3)Ro$agl z2Gz`lbu6?=X!5CBCG@Z6a9BA_WD^TLkC=Fd@*~{^-R5+vqCO#hy>_#dyFDB$D9@Bl zAu;FoA0%((Ib83`Ne0Frg7Ik8I&+eA_b%;dzO+MT0OrT12Kx7W(EuK4`!6ssa;M7E zjgGy^7IOuNt58-8uXG7vA;KZqpZwVqW`H+7LH9bvvvz5~db;nX%^vU`IL84Y2q`i> z68YfREI2erGx6`MD6m{(-(&E^dQ^c%Sf_zASW)$}i6KaKM$ms)(5k-Z^iBxN(R!5H zi^f9jMnx|!v@cczNWNtK6y$1>w3>fG(112R2a+{8?yQuH1ENk+TL!hImS7k9hUFqI zs#n_oCLUPIxp9cQ?JP`&0FGk?Ah+$PS^k3PaVb#84*-x&foicS%(q8;{xsGOV`j%F z6B?OVEt0{y@=J#~?fkxch`tE4qd6FQnplFgRa~fW_gbidamoFCk~TJb?H_z&hPmbE{v&+cAm;~+^z=_;PqG@BAX%$%NmDC930l29RxjFp`Q7@G_`G! 
zToVN2CCoYD*GI0Vx0_a1kc*awo*c@q4cHqflB5+D)<|;^`SuXqu@QHO&iR61CTke& z`F^d;Be4lSrKwe%SFA2REHb=gurc0bd23P^V``MS@qE{i%A z%0@KpzwqR9ZLb{$gcAr-*6deuwO2~wam}n8q`sci{Vxq<*7Kv_oJE}PsY#mKktose z1_S0sqtt0}WpP^$fqVj;!w3bn9%s!+6;b=XhdHwp}@3sVJ3^ z8Etxa*X0Y~)pA>HB*`kday~$K)rmaay26?x-~%~vZ!Mq&z9H&y?F7MGW;HzyR9nPY z0x*be@YYA?3y}wf4-Z@a18*m`WJjKBz#p?*A6reqJ~?I?PoPjHdbOtErO-nJODk|$ zG3?Ndpag}a#sem6Yogb<_1j|lG=pA?0c?`^d%I-2XB2&s^9h+=FwPZK=NDFv+E}|Q z;mKwNJ6uZ>yWA4zMKEE}0eLS&ePvA#Qggc?N&mN7AxX^P{1e6eU2|2 zZ1m3 z-dpQ7+jS@|Ewf)aU{@-#gFL}NXhoW4Hi9n8JFv$c>ds$zA`cHdg@nsqD1`198UA&d zMU{^DrALOp1#pm2sFi;aG#`(Cr8CN6S(~&Ie5oz82|7iSY1MhckZ%}>=!w)Q2|X@8 zF#IYCW35<9Q1Bq@{0FepfvN4dyu);b*1gky>Z zl@1qP2$`+IqmJNP^fb(If932qCC2B81>4TjA}Fb`UMS{u+z6gQi%7wPjqS+n!miy- zS^C97KjVK&B{VNX&gwrgf=v33)dDkM_DT+jKa`o)1S@jS`VXKW49N~|`X`6FURV}E zw*)3ETOk=lktbhU-B8u>DT0!@7cK_BEZ5mTcYAlHByuwVvDBrRK1-uTPT1_%cu}-k z#2POgZ^SDiG+0o{WG_nWZ-1lU_x#$BZ;|0QNvVG{jB78xNqTtYA~n%H!U#m=VEL}j z_%#n(AT&JNe$D0Y7_TkOG=i&ZkN{;kq|gp>e9pVJEg18c?V}gc5qV#1 zm{u*)SV9(7(n=Hw>0y0TT8TYI5LH4dg=Mh!B0{CiVkV|Q!cdifb8Y^8+mQl=zTn*N z5jOI$eyE=mV+KYoX#6X@L34O(+A-*-Pn4ftZEIxO$IVk~OZKQ$^3wky-k%rGJ3W7{ zdEIJsA39u{zmuMXs-B52D|jtEv#iin+u(QQwgNGWQrMk2^3pj(f5CN ztp0DbJ*8q4AhHD1b0IHqU=_oe)GqS>Qtj*KDH(D9#o7m*noq)@@)-}}yLkO29eEQm z&)ig-JHjX0tYsZNmt-d9U4kE0;Eh6R==+QJdNNvf4DCC-QU~P^X?KuiTuE*izn+O} zUN?^L1$hSZPqjB}Q$0BnCY*DuT39IyGWtH(;n=8cNExmh)i!>j#M?nTQOIZMquyU|@${_v z{az-(;`2DIA&T>&AW<@_YxjFWP5N-ejv@4x;@5kG)g+n#(4tQE`m0i20uU|?z`*-& zwoC3>oB#xcn7ulqi0|=FKGAbebf2A22pO@T$<>n1%(wpwIFNDBrWiBDQv zE>i)I(?rV8F#;n#HsV_Y4|$*8QZXZX-g8FOGD^N}NuM4pT@2d!ATQdC{b1*cS{$jP zjJ^wGo%%h0Z%P^A8J>iVvuDNYRw?_2w+NR7&&PEVGL@pm%ml-a|5@yEbQa?a0n?LSmt5F*>6aJMlT z8e2-Tp8J5K8?*N=ZBiboF$HTDw0Jy^VtRW*_Q9AItzs;f-HT#hV)zTIzxA&Z;Fh*h zzso`bInn)*`(WTul)vR)n->~AN}(v98Edv>agj^ZS(e(?P8yY1V}t7jW{jx@FZN82j}x1*ypp)aVR)$ zCbc}fVb5ADw|E;bd@Q&_9E?HU7OCcHL^>)GzIFeuqpsw#S);<5dW$NHM->*9+;3hx z^0smZ(Vf+yOw+8euU{_H6104Yd1@k_BVE}PSwr9hhu=yPy|IErViAaE{>&ECB|PHX za{?GpVC!42Yma!}J3SU?7mv9* z6cAp@vdvl!vmklxKHX0VwaY4)q#h!c8Cjk^SMnP~pk5aDz<` zQ`28TOsOZ|N7R(%KplHb< zslYMp{_>dyzIX{xk@{r46D&yK?#iuBucTB;O*#A{zeV~}ShHRiq{WWzugpZfDDm4&a);5JcyUfb zTWs||PsvSj<3bsTq9o zDf_$`H4&bFaw?XiH2DoAdW#cs!T~BEtD1ffnDf?YpJTeT_wl=NEPI>0iq<*%+WQ~M z5A2BWwT(mn^f&Q=O9F^lXx&rqS6uN%N)h5JvNM@&Z$gK=p z$@jYlsq~2AGc*TBT`-vYNdt_sju+r*6+cl_-sB!`Xf)B*zoQ>@xwdoEZe?KVZ5<1r zrndVaQ9Nh21^b%pwgVYH26oZzoA2l|+18BP_`aApnCr#VlHrVn-7V`CugO*XJg`I! 
zl34#%!9KT+hJI*mc#UXEAx$g+!1^Pzcc@}v)w6Pgu_Y$k$Hbn$8E=O< zOT@@U>@>qtcfkMN@hVGj?etF_LGhiJ@gNG{1`MY9BdfZ1V3Qr{lDp?>s4+AbFmZZa z%Q~Gm<8~vTRgIbD_{M-fDE_bmQ)slOuip>{7qVI&h*Wc$N3gVkqwn2Z2cov3F=iBt7s zqOQ-LU$FW;GlDtl`|#5`g^01{7|1Co5EP@uM{WdD<{y>qS77-#e7S0E3B;9V9o!eO zr!*FT(Jv*aUZcFpx{)LDV-s7>OtDvsPl0aK{vCU0b5$bB`fkMhPsW5s+e^x|gi2;+ORsbkkA z#+K>LDS}$a6Q2hdfk_0rqI+CwE2-4qH8%gljgPO|6+7plsvUbj=DOZrBL9X- zqx#}%mMj8;>nzmXa{9GQ{P-zVZP&{ z9~V^UQlc{F8jj0U1PC0&YI28O9Joh_Zrbk=E{%BYe@9*wo4D~dZr`}o;%Dqa<7%Cx z$IwgWeMiw(Qt`Mn!%e>t4TB94;$kcD-V6j{7-{V3!e1Hwnc~tDax_pHMJ-1`&gP7d zXs;~|o80FX58foJH{lk{8@=KO`o7Rxe*5*dGnuqu%P`MELT#bk=eoV!9At~i0 z{sLiIZ@2PgkzxwVQ$1i@(a2`~h+9p&Eq!fF+hSA;W(vr@Kfs_xENQ)EevOSFb5)>( z@QwK2{I$t5{d@)J+jA~gJF5kIcoM!CY)F%9cR0nsKPx)Tab|g{A_1_q;FiUfqUxz@cQ%ZQWs;9#e^p)NC-h&8w)A9Z96S)OmT195FKMMu z?R)*G!pinD$NqH&%7Y&q4nt@R5!|N+xKl*2Tvs*5F+m)e`Gwc`wTTJ7vSLWMA-OLw zTE2|}bGm(NEPag(e!JiLxOEIzg@A0fg;YpdzUeCo8H^)uhKJm4#WgT6^5NETNKwS4 zpbKaDg>qCnn*4{{|3{Z^LY#(W1Qac~<6)blqnzVoB-~i2Z}&z@Pjdq$8{l?l6aNs3Ahyr+RR zeQPm+Lvb%2TBH4k93<*6a(ZN5oLxB*24VA>MdB$giR`;w#f#?LtX%mKsE!72(2EHBxAk9(C8?z#14+tU!07yH$xds|;x;Hf(7 z(s<7DRTaoN*p4mt>159`?BuhIQ>~y(^|bSGy^qK{RIED`%%f_A5^^D>#hzywMAU~E ze#T~D5Wf10Kaf=!UBo#*8B@!6=hKHSBqrZNEPU(+mga76;ZMBqW#~~DW?~wc#Qn-o zD&7SuszxEv^tfYbL+ktzF~1CF>Ia;{&ti+WGwy9A@I@EN?(wWs20W?3u?ktA$=PhM z!w#S9#d71D77lcLzZE9$$+E&7H`!SGAo-!%V_Du(6mw<>c`+}aT77X0rBuiYk!j{u zw*#l-;J~6$I20scS!v0@!k0H{eQVU{Q+QjED0MkX>JLax97c}_6`I>)bq!9V-W`6(C~8K2D;Yp`nyA8^A++vWLe z>hBh#nI#6gPMXdI^R;co+|+gTGL~Xq7&D!qUa|#~YY)=AR15H}{2(fg%~X7VoW- z98IIB-j6enB}iXj8m9CCI=V`|FpuuL!9u#j1^c+9H7S;<3;1_M);6}EnB4PGKE@?& z1WzUoiJ&&YJq?@S2D8V09zxzidAtIaYheV51-RZPU)Qkly=<$;G7L~BBrY@d zl$QUYK*E&F8<42D|9`3Z)89kX;?LZt6BKkm2RVT#VMKX30nzwWlxQyX0l5f-OII!~ z_8Im3zsP)vwyC<)GW#v+RN=d#Sv#BgHYVEH%abW0wH`cqLMHl%LK@#dnP&btyS}IS zQDK{c-E70=DPQ~m53fYD!tcU0E*d0LPW<`XpN`|z{*b%mYS(JBy`9NwvPH{r^o2Ck zoZ+40pPg47G2AoU>p4qg4?;A_n$|Bg;^<_y }Mrm#Bq5g)Zx|H21h8>Coq zKn+5Q2MPbYQH+r4)IwSY#hJ0Ut1hCt=6Sm>gx~J(VX2ihOAG&Q^ddW?M*#vWW5<@C zFv57_2U?5e0m`K)PMS@ySAc3dzwV4>+wit>4V;v`2cOGnW)%v)T^S6EJOhic8n(4z zuk*idjb}H&EUaNe$r3=5&{Hs3#RTyf#pLBKu2vHHP+n?X-`TJeFCY_X@`VGSOGl#w z;Lc2LfPNRYP^g5}pGK?$eYa**Jws#$L6Y^^w~?aeCyw}_zM3L6^JA}ro(Y*rlYdeB z%S3j}AW?Z3be8Cm+YR!u<*ta&n;@(O6ScqT#~UQx5366s zm!Evn@91&h7nOteM#Qtu;P3!akS+8*FEZ4A=D#3g1`3VLU)`^ z!nV2ikejs9m@Rs7dcT$Y0u87Zzr(ZQC{z$4sR~{!PxgIZMs-wN>+F;}{_Wils#~oa zY_^AddIYl&lp_gJ{cmHw-fcs{GFyHGhbovpkC3EoczjP~YUg)PIze4d6v0;=Ieo>r z@9mX1tRINxExrW?nr%4m@|L~IrGoWs{uUfx)we}eHp1|_d{^Hq)YK(rJ#d2}_)`*N zeG3&yYAS@d{bh5)tzm`b49!zR9z7G|;p3}q3y3dMEmH|&2u#bygvJ6`iBH_S(HYNTLjk9{rt7x}s115u|_h<0g_kmyKP zO#13uAWN@Afp8)|KT(#noJ5-a<|kjad;3U&qj${4s-D_o#X9!&ikf4ax#QRTZfy7e zfcA|ry^DNs#kME7FrBZ1{6g!}T736MEM2*L1M9Q&;nd12Fz;~B(j;e+PxUZ`qT-V{ z(zs3+vy&j4Kxa35AGDlCrwQW#TmLkWDToXfwpo#f-<*)CWLA zo_DwCj8j}+Pu&>CSC&4U(}6jVjJ&6A1S(<}H0DW2m9I}qzFdtm+(arllu1FIZlZi_ z6?w{9OGap~8&PJux_&<421TI;BRp5Y(mtP5oqv~XHeAQxi(=xUM|@^v(5oBU`uGw& z54|o~|GJxQc7Ifn&#T+_;PI-S>0i`l8v#osue5keFL6@ei`LS2k@t=`j9|5%!un&F zjoAKq%4+^*xa8Bfy8-DXW|zfDqu$v+yu z^4WH?%W%n}6S+iGR$zE@Z#P1Wa?Lv~I?|grv?i3+Q8ppTK5J3sKa$giqXpbW9j*2E3+iySZ*?` zD90mS?iY!XzS7C?)w2HDgnh3lxEwkT$L=eOWCwu3Mke=34fJVhM4f)D*1|Hg4-PBU zl0AYt(5PUE8LX!TQxi87t}9>`baBAy7PAOa7iH7I z$ej;KIwAO+?&o&jQfANZ$mH}78XR#XP=UL-{4}c@iF1B(UgYf#I@+6 zp5naZ+^E)6G-tVnXsiL_sE>y_ijR577(eUMiPf3zs z^u6TW0ncUOpJh1coc6B{dxKGkIJKUsA1S~C zPB|8XZ|xho7L#+Zv;S|X5t!y zg_@fuB&8aa>tEU&Nd)4=w|ffhr)ImqV%5SoDfUQR$P1sR7vCTV&Pm#-ec{Io%xM2vtwNC0cr zZ7t>($4#J1k1HJ{GaU02a>@*DAPYX^L(uXWR7%}99qa;Vb;IEKh@zJQJ)E%kOsX`8 z-Vz(%Q#EBgm_YZ{zYCFKptdSU-b|ODq=NbGDglr05NznFBC$`Xh-X!^1DpdJll=@4 
zm!v|prhcxd&y|ZgL|EYulz9YG=^I_ySHrZ#io{L^l@-?D887pYHSim|Tbs{w9mX8)aKK10N&Wcyrl$M6)PoTQndNOIj7-t@PH0G&DPU+c6Ac?z!##-sNs{j+0QQLmwJGbFQ zSHvRy?Dn8m6N{N)Od)$gMhUhMJg- z_)AvfqAWUf(g&IjTMn&iI7yW=JrqsBtNiC5TzACQJ4D3H({REI&0MJ$*ZMSg8X!+L4w|Jt*+k?%dz`- zZ1#pjSG(3dOLqA@(~!2*Dj~%_{ajS>DEZb0je*kblys|$fUZ{64r7Y&lR|XYFEC;d z+mroW$k7p|1=P3OE6AGQ8u(VIOw{sfls=O8%80vZ$zwq{A+r9%6Wc(VLL;{k8y<$R zU^YlF`?)t^vgsX`9k`0J)#?gmf98ywgNvNUN00fhpzPOtj3p=UI;9bJ!S3q`b%&1M5`9p6vi( zU(Cv8Xt{`O9T;}PMiMs0j+Kp_v1PD8@}l<>JXI`}c+9+! z(9aT89E>H}&cJMiqMc;V5Gsr=YPlvpj6Ke@b7z1X3ZgiH!GtB5fdQY|ClRB^&g#O3 zuIP)rDGhOo%**r$O1{LOdjlmxzESx21Rqw~FH$iHI0`4(VXGAKzqgg_;Px3>nBo|_ z>8Mu$`giO7StFwp;Hri@E)>sMQpPWIN0l(V*Gbc-Cr{fj%vE+3U8(Pv+>Q-2+=Xhq zrh~~CZ_-q!E;Vwnf4y6@MxRjPW?{_8Nb~o@+c`K^Fy#-pqoK6!D1TUyRSznSByS#Vi;s*}9k{vz*A5QM@$#4G0!UQFjmRqSAtd zKkR*OTG8-Gf^MWZV-(v?w^sYSp3^_Rdnj4MW>FYOJxP#Io*FL0s`F*eD+#vv?TW0K z+D;%+1`yKJ8wI7!2zs6G*U+kM({dy@%%)SS-PRtRa1=@g0g-72_7VRK#J=3nTC1nh z12*-0D=8&;^>&A~v&f990%^I0hwdT#pK^6Ipk+s*oa{*zC~(=8amvA6pU4wYmAnkf z0la_q(S{sfPFEPq1sFyE9QDjw^&u>VFtq?1Tq|j1kK*Fm;w@f`JJ`kO2u=?oapYih z@84}7;_uYO&r56^U96}u%;0r$p9K7(TsR;Y^SU<&8`~P*w&1l1iRX=AbbJ;Sg~nrE zfrvY;ytVl^e*?x%EW<(DDsXu^Tmi$%Pj(_HNoRa2)_3GwWVSVJO%aQgdIDAbk_PVM z#Ec8LS>He10<<>|hc*s^2d2(N=+{41u#F$6A|*$?xi*v!iwOpLG4)g3}vV9fR|9=s8! zV2&yt`!#&0ybGbN#?;nO^2p<5-S!?=&KLD3#+CWEOZBlZuH!auLEYhl09gg95qw1f zUa~X2_dKUgV-tpW4h5qkbh~lv9>7N2zJn7Sb2iKTDvo-cqEfdMC~S6ZwF#>@bmcF2 z38@qFOULMpm1`fd|CVfQ{iTO=i&d#O&sVIzPL>B(Hmn{HXoH%cG#;kGE@nvR?N=Ee zXtQRM+}G+_HCQm~h6`yL&BQfXDE^4~l^{zOx4%Qd&ttX9;GxSd+ZIc&tQL91rOtw^1)Tpb$Xw9Fsk$1ryqLk=7a zLb|PbwNAYWD1_;{O?ST?s6L1pU5ON&u~d~~d$&|!q+Vks`x=v}=4foTs7Buf7q$1& z{Hh}m_%o~s8Y!Ce{~^MSe#aL!5FBC;75fK>ujdWuXYG&9`Td+4itvl$4rJyMLUKx& zA>v)Kae!_!QeLlM@$fC z0nTFSzVuPCJ=YsDe`%beHJAF4I17xyr*msBK{Ym49I=l9tW%f%X zjybs7hFC(g@W~SkyyWCRH}}5*qL03M{Ksoon;uFaRH2LwVc2x(CaMuPhe| zYlo1Z?dzo&TDC-d`K(PRVG*IHJP(=)yt^@L)rR1RL^v#n`ZiHRYBA71mY2*51KA&9 zH^X*I&W^NHpH~vJU>*sW7(`eM^~b&JucC8kh$-tY+`6`0xlk#qVI+YG*wYjrbc^`J z@1REWFlKss%x*(1-JjP=QtgqY9k)K0)4I(6n|(vw$7wQ_O|^Id>4z@aZ-Pl5dc?mn zE<$KPrIxg~%lA(uK}j+1NJc^1su5N()Jail_eL){Q}kUUC|kBDq)?F3@KSp+2qdv| z;yp+fqS*X+k=nI%V3Iwa8p0dn)cvLlH zgnn}lijRoZA{XF-f|(JDjWmZM9v?vc$BJbJx9; zG=+5{YzVr8la1YMEIIqm*zIW-SmQSRua?2&;%X)a0D$(T?bK}rl7LuDzLqE%=lhGK z(xnTogw4>gi14oni-$4sv$DXN1t%gNg0ZOXRfM+euQRIauqJr4UL zyKwfEP;64Yo=FD9{-|sSPKf`$3a;mv7@@MF)2Bz>+pI0LA(N?lHw)bOpK+OOO3IaT zgZ$t5+DzYjS5E(IEo|E`%*;E)ciR-ayV&VpCi zcV29=g<3pH-Lrfr@=`-HmZc6Nk(omDi0|jkR9j3Li}TZT9`dXZ=+qb3c(?p2%2qMW z2t|4pM-f=<;7yjTH^q>zxM$Z=29S%$-h6M7?1y*8qRiKAWbTwnpfDl0--3IkN(2;b zTVpNK&3^Pqf^?cK1}Y-Cwb4RLHRxT#`ELl6ueG|9@uDkT04{IvZeAaF^1~N`N?9PD3%dW;or_hfLSTg#vH6PWe z0CI&?LSp5F)z27Xa!UF6xNvd$i@^v9nu45aDvDw;lFSe8gX_~Zh4509tW7oJG)8Y# z6@7()=pm{-AwwIFClWa^ygr0ojH~ zfJ-j224i9}9}j1aN>_+epBnb>QZ$SPQsw17kayfXEJ!-57h((18bj#X&vEeI=%WS5bO0@tXG)#DIK>_ zQCdG3BNfw$h#YON^+z&1vN;PIIpx{Di$hzmZjRX{``4TAdVMVyzIy!5K}DUS>K-yi z0q^D#_V%0ywGT8zed~jHG9Pi7J4>`Jk>b){)!$z*y^cJ zb_nT}SoBIN2%>xOTkrWIrHvPvz@`^!Esd+}@bKZCO&}c#ci#d`0QMQ7_%(ZCPe|3A zx+gL{Ywc?C4kn~wz$}SX9gLKNm1FKt%}VOU{~=a$jEsjcQ>pg24b=0WaSknzPUlzF zFE`rJ&q!*nDX?;UCIA;2{%)DiUoCzYQG#*Db7DW9@QG_D#9+mB+sCA*$vH_WQdRiN z<4~J?%I;$?#vY;_ENt8q8p2(s7D!{9ViP&ETU)sF75FhtE!lZCw6ZG+z6c2}<7XUZHJIZRKj8JB$H#*M z7YP1-dY|+k8cKD*MFc5X3OZ*D_z%08!RaON_4&nC#H2uF;UMGsh+->3lH4qrA48X@ z-AJdfT;ru&zKUhg8VWO>AqDjuvFqGBlzibj_z?;FUMe|d!l-fAb-AocfRMdABV^a2 zENAub`g|%AvVhYyJme)Hve{RQ;zb_wY|i10T|B4C8|k8z!U?x}#Mc_F+=6ALsTk|I zm#fdFD6`N(Xlgb`fFy30U&yB)h*~=ZW^|d1|W0f|_ zW?R+UhC>6VYw~IoANpAp9nYBxo2|7|a6i4kbTq$v3vq>8p*@3`S2H&FATdzb%=x{Z 
z>6@bRc>ozUhbna3A};&cJ~%{x*n%@uxTr3~ffJWYn!+imjH(oW(Fcrk5*FNzQniuNnsy6_%5t4{6>?GKUfYM_}@o>8x7j;91_Bh(J5_vc~9 zbPRf*?@M}6-Jd8`v&MlJ(@g*{yse_N-1YViKHg2?cJ>8k)wi^Q^*539=NG5XFd#he zlS}=F=ZYglLOaHvVD4p{-y1^>8ZVNwG+#|j%}AgPOwxIwkD_$g$tLaKIz(?(VV0Jo zdyM=sL$1@~=1Fao*bDVSu|?0$wMah2z)n4uDK7U;)kZ|buy!NA^KRtnTl+f@&5Oup z(Ja%4fpV? zl}{J$I;sX$diKcwt~@{f6XA^zqZ)#KxEejeyQ&2$oh*F#@;rE$DkK?xhf7(TMQ0Jl}dEv z5NB7Pj${xXVd#nEo)CR&`qhe>ba#hERTp-J$)mBhmE_+r?s}M&uLRySO6)p%#$Pgs z82h3(3U|Tli4t8wQm{rr>;6Lt@i+C_@8|IhEnA6Y#%Xnym-iRBIFFfgjPut&6s zmQ#jmNUBL9e!qC=8fRv@KLLnVJuFxCHMNzx$S~O2LfU4Cb8%q(SL=mmbrOxDIk6%Y zJ>*FH+LMrh2#G(<#A`$CNn}tDe=;1^s`qRUMb%dn#fQbM3mAmKqE=PtZAB~7*6PiG znc=Fg1=lNH!v3Ov6-@VEe*x^Np~UR76(+i}2`R=#Ct!|!mS`rsnlLOvHc+yXqa=!& zh#kA)wWpR7+s2eTo9M8T&^vaVh77X12nizY9OXoh4+0}@F{HEGD!38l1kud&j`CvR zM-DknJ9bmpn^LokgfMiY$!e2C7v~c$HYZM(k_iP%WikxEFT1rKg@v*b_!E-M7Pm%M z#;2hdQU3jenap-c{XG8#%GmjW`pU8Lz1Tila~8$Ig_{Xnjk?7_YxWC?AIxR)4_7&a zsl(*e{gJF{R5LlZ*1S%Aq`|XGb^xRDtgzHENJiZt9-S)v3TYJM;gbWvt2ELZ4o=if z;yfEO%Rn&BER85x1u#L?vV(EY4l7vtKPBT7Dn49Ck30-679NZI#m{EOoQQR!s5cF z?DGej(uXc9HtqJ1I3%Cfvvt_8gLD{g=OkvcrB!FMxbX~>zRvUR9nkA<&=I1RCs<8* z4BIfu!EkUhV~qJ8d)Agi1(AkXI~*J3;GvA!)Sl14KA?WpvCdveuKGhI}j1IkV9jw?TmM>hUBLA(9#{pf6{;I2kv z7F>+%x4!p$-Gv+lJCXNeKSGKg_3iUp-$e|&6u_kt;alHt9Vb@KbHD7_BTDv8!Mh8GfgV8W?eqSKw6oVZA~^g)Ur&IWt#n&D-fJR{~M^`XLlsx+~`U26N2Up z{otBISeFK(hFM0^+Bj=2i|!pDN|^|o&7{5}75`$kV)Z?u`0eu{PlT3LD`!4injCOi z`Dd=|{QU=GDbg8|&%B+EB+2H{A;<3|!d&KrxfOTR49wan(Z)Q5d2O10FZ}V*w6%=~ zO9iwXzUi4%DRkPLauHp|WF+ZhWZ3(%tgMA<3jnl`-&V$ftUepbV<*Gbifh)30{6JV z3F|NW$pgg7Eoq8VI!`Q939o*)V|&Q2mT6>i!&B>vNBEGN^InE)9lnUP zPE~8X;ju>PcllOb7~%4?-=Mlp;)PC-)eOz^#PecxMYc%*#zolJ)0Ue-kYEZwKY2=g zx^9=|d%{&avYVhr63K5iFz0TMB@KWa)(=km3L-=6J0bn-)o0szF)%v1b6};7Z-Od~ zhRzaxZCn(N8>B@$G9B9i`1E0hysgZJw+OmS74tp2kT0Ygz8 z{?i#_vG9=(Ts?9ctN@FAqJPXk>1_Cb@&{HD1bfKcuUw zgE%ahs18e=OypCNw~nu zU1)lKKU9cT`ZRuAW1}DWW0WkbiVif*=_chWfX`^pRpgd3Rsf?aLz11c+l}~5@QQO4 zQ)z!bJ1hju>+m}ME05?CtV9#2&qOTMqIb}g%1XQ0W;#&$C42X#oJC6*GNtvR#k6yBYnIi;23Su zWw8|4YS9F<7yfCcr2no65D-Lifmw)~*g|O)>U>K-F>9vjdHA0F)~LW5n7$(Wx9@KW zAJpOb93}$a@rf!AjJFYmO0aHZAW9G!jR1wv92csypSQ$eQjY~EsDm;T?{%;+;YB9_ zU{;7T!CL(7(k5(}6wwe9{+N?DB5_BRnT?JobI6=vVsgsZj8c##yS1o&>%A#2puV&v z>88Gqp&@_(I@c}HZXV*}B)`0CHk(8X$sLi+_HukCOu*J?!S$Rz~Er} z=9yC+h|p;-Z#t2{+e&c2hwi%g|23}vx%~HEAne~@D8vd5m}M{%6>NY5XZS6O5uwY9 zAlB#7%Zk93S;p$!cuPZ%;Oh(M6z9_s5+NC697&o}i|o_>J8g`FpV#x2YA*FM%X=K` zldk}`1H1d2>PRI)t=AV6gFxWbUg4pn-2Pf%Alpg*ioo}|7-4=G7)EP6PG zxoJ2y|A_3?)>7a|m=s6jMp6LuOd+eSfPN!;IaX&1VFau%E+9y&=L-RUCxT~gy2+i*6qMtB#fnw3S&3z1 zq|De)WKSqOT;)a8wdI2xd3|DA@~t$ zBM=kS$Wp^{3X-b30802WQKSyNP~COg5bVl@xkHN_xT>c-ApU z8Y+eK>P5=L=`dHcYE@*TWP2FgW#N?;sLUgBB}%x3jt^z|Th=PK?(ubyV~{Z^7>^vx z>6wC*I4`ah!EABkyC;PlbEX=H@o8o)OWqluKBsHhsI9!=V^py{ib`p?)0gy5t|fQI zW>&)fzYw7=k+~R7LD5bd-&Oo@`=UPy3j5z*T>v-|Kv#>(h^pvo!iV64}{j=7}sBkY-ZW9G2JW-GB9Q)$Rf&^48)?B zMV_-sEJmTyPuJnL!s={QL;e2#dP7Qd;Gpc&f@$=cLO}nnMZqvo(R0Mx=2T|Wh?yQl z&m-Im-Rc>8w(e2}DHYr~=y1?LN?w)LZw<9lOhiGiq!E!11bp8KD;?e9z4 z+3&x5aj?oMAaKwcedi?tftZKnqQ<9E7M|b}W+u>P=@1-I{|(*$ecJuWo&pN2i?tCt z&*c6LQRLNCDoGKcW!dm5R!@-fO2};x?T*dlw9nOi?tEZjm~9fF7XT~Vso3ojz(~ca zHnY|GTH``*E2m`f+DTp$kj*a_SiAN9rCY)$(I-(_*C**C_Q%4)Kz#938&D=$l?#MQm=|@5C(0`?@GUiP}Q8Qu-7RWib`hUp>{BM>;>)cf7P`JV%*S z@FaL@xc5Des7q*m)n9p8f12EUTz%RMJyy=`X-)&J?-|h^pI`c;i#XEZn{TiL_=UL_ zq%_#HD0!;a9FtyB3X}M?zc>5GE&tQ={gp;k+%nODyQaokF=)D>agkDoSyjN}E%eZ!sPPivXTUaA<=&vd2oL} zu!KAz#OD)#kSc(?ymrx@Q4V>V#dDoZOp^o8c_#GuX=mq7NTEneP|5%*bc*w&y}Fvl z?u`S#2&Ouhn~^4AFT^wa+>mt1TE}HVg@qA)tjf5zz_-PkCo}q(^9J|?z8Q~w2l~?x zK>zbjmQ0OHf0F?1GMJ3W!;XZx|KUVq6m2;CiM}@ACrFdYyR4-NHX&&vS>3Ip#{075 
zGMq!5;#EU{GXi`JzD#+#eQPEpEW=DrS^`y@Y>tGl^9 z!oRi;1Ot#|dZj$;b+u7VWgLyg{HF?<7zyc2Bo>VjOx|XG*oy|S0t*9Fcx=NkNi>n! zHNT7wWqHOpc^R2`$)$Oni-^Gei>wP-u+hmrx>lxzDXyniakFYCt}LR67pY9SlO)%9 zHAI`_l-FxA%j~@{+3Y^9Oih2sVfEsgsHlS@sw%I9eqBL9d-FCF3iGLwCMojX!zj@x z%?uDWojIAuJ5g%?X1z|jY&e0N%M~*iv*Rhh$xVvqadYY2^@M#_Hf%>;)(-}*5c|+@`FJ5!$kvhkuX64!A%dW@6qXw^zI&+| zyw$};Y4-`Q0`06BU{Q9~j_~`k86kRjAanA~V}g#IR$OwkUHcU>Eaf<;KOW zC}GJLVPXJ@x&FLunIkYy9aO7xYH@wX-k0knmGAXR$M>Re>(sPCUL5K1nE9}p6YqXw zkzal$YjOAPD*5dQQx4q?ESpYJ`5FXR;^D$U2zEx?4@-6B`DG=+kR0~x5 z0%%E;^9GX7*aRx2#Sdl6N;;Kfi#u z9x6Z&c{WLJG2EVxKK%p-+`(;sY5sA`<;=`^1PNm0tK4lUsA25`LWbwqIhH&m@Yaqz zFi)qj;(4RlaIaOm1!|}cmP1qRQNR;m=le(@4=r_J9N71u&cKz%ZdZR!>}`?XnA5y( z^ahNfWdZ6kylb?yq~09ftdB@2Q|UO^%{{d4CJ+ndp1)x04aLY^f7!?He%8G$ z4(*Ur_Q+d^EIMG}V)=N2^q+ALKxAZMSm(W~d(#c>1v48Dw?5|NZ_LS+z|-4EzuL9( z<mWrO|pI!ZEctl}ihmdjD_44lq6 z4a5ekrn{n+Z;WB)s0c9#BbHYT;?VG)f&lEPq@F{tR|~ z%z%Qo`)_U&zZV5Cqkqgya8sp5w}QX5t0&Mi!?RYJ*l1exZrsb7{!3lX}y<%Qc*~h?l~|eQ?|YIn3AArP<9vUX_Q6 zCirLM%MRJC_bY$^P28d5?y3#_m3|?)&xYE?_wNPjlI(Gv!qj4& z&{(YFDYY$~n7?qg_AcpeJ=}vyz-)DEO*ZsGeIe|4yrXr2=HhZ`0UkEFD8=M{itOi4 z1K#B*!LW?t8hTUK`0VEwt(S;tpuxzU7?RhN`N7rkRI(b~5IyVdBG6{F?a(NN)TLJW z?pNM;3ANaW>xNKCA7OGjLH>*-DC#a-I4~&otbO^TN}1NQdIt{>x8i7UFWJ%k>Ct5_ z5f7-Ap2GM`f%u2r-cvX+_T#T9=Wj81@SbRYL=ytSb5(R|W4dEaaYfhUO{Tk|PKVHQ zsPt>*ZF3wtuV<;dtYlvj3YNm>m~>I(W-xm3XyNSPy!fy{p6?*XV6n|%W5si&Fq*L$ z5BX=d8qp!I<7Eoug7TaC+7IKF2fU5d=}mLqIpVYJ1n0MuOmc;7c@6{dEQ9E5-Ezb{ zNiyy!t8F?2ZTaMOEg-^@D?V}l(Pki4|!{#_79b?eOO zG;f#~hQvayid7v6Eo0cK>V;K4|B148?cTiI)XF|giL}%11-ZQblctVCC{K<>0reR( zwOR=<=3b7&@{XO#*f7wGvh?r>Qa{{i9lz{=?<>#i}I?DXb$}MJxYS@n|g6 z{kx&;TqaLg^4^{8&Vw9C_;08)N@1%dLYs0KVXaVc)|ou&L@T0A0lWPR?< zJ4C;Qi0)>E(zo|w=eb~m$0ekU64orTnwq7Ue+vKA)IgiHfrN0Ypk$bLgva}N)y8sc z>#NmH)p&f~%uC-$6^%}fxn;DG;TCVOUzmu9d<4n%@M$%SJm75iQ`1I?J#VqR^2@*# z%w3o5=-A;GqlBA`3Zu27h77~KsmZ+pkK4o>)kUusrHeTv2l^F#dxN1x3Cav`%@VU# zkDX&J3n8HuiH?P08J7uZOS`V}I?_(sWm6VG9Qvy?M%~OnhiNUoAiG;AKU1eU>0UO2+_^X|eKiB|cuu^WmT%74HD< zJ@9CG6tx}e=T~colqTrX@Q4sCbcPg}(G^kn3|{g>?v_F+wEHuzK*Q*ICXa+nCAq2z zAnhB!e4_dW1kV$6LW$%i^IWysvLkA^ODD#${ zR#8F@RM^mI)NpRaPOFZWueQ?&wYLC@3*o3;b1Nv?CT-Y{(%`o%gArKu6+f#biTJZzy@q(caTliFL^w7+3W*Gx5ht`I2f!Bxp{fmt4lSIqFN6$cabm%)QJ zk^-fLEC8Q5tBu(Sq zACwEn?xQHX7` zU!1;%l**UrDK8~Ty2&~4EteLn)}QKJsnfnXe%(~E+iO_dl1?(?jqX_lZt3v({%0{# z&}E-Jrhj!2iI7C+Z_q1AxwKFaX=CuEeX}oxO%hBhv;Z&9%3FryrsC(Vpn7+O38bl;;1-WC>dSZ*5-Wx zXnK~%h8yS-X?~w%NCoLHo$?5dx7!)0ZfuLbbOm_%aZ~5{Y%6n3CD9<)#XElu-fNE{ z>6gC_T{MUu&q?Iz+dwPT?lGi3Q4Ss&57sA6N|x75+#fbB9?}P|L#yK|1n@U%MhWxg z!>39I{$cz#c{^%y8q)h*pL*2P>?4Xk`?^haHBuZlO?wr#HqIrC+Q-?54bM%%>#t4a zu5}2Hx<47%(XKgZa3zm$_c?U)g(R-9%&|J}1^~wS>O(m~vn&mAjLph!aPSWoL67!( zk1eWfMNR@%7^Vd~$=Fw=j}j-vML7j8N;a4I$en zC&$3%jH89aK4l&s?LF(&DlN-CQ)kU@vHUgE6k8YCRZW}M(zy#FI253o`YQH*EtmZF zCgyC3jP>V>l$_|KrjMaF%pp(2Ei9<-!Q=GWrUf=$Ptum6yQ@weYigX$xHD-2JGUL? zPNx+rJWf(6+~~~crVwYc%Or1xSqyEmGyIhw0=qJvt8AfSlqp3qj-Ep{K1)trEF%`i zc?yiJsv$1YOQf|C1ZQj2Vq2)u49AfKHYKy${4&yI=M(J|j0TlsB@NPvd$<-P^wr~V zLHg5?Fs^V~d1jeDTdwLdjc+6Zo{5Lz5ERDQv9Joci33IjL2m`I)W@^!M)!KjNGiwU zOsshFZ*3FZPPpTci>@8pZDN7`2vc-kWjW|7Yty-FeCS#{FUUV6SBTewQ!bty47J%q z6Uum3t|D@bC8*+}PT_g4@yc_G;^D5B5p0UmIh=RvjXXS&ycQD)8u;hj_dDxAd21>q zpT5?8T_zn~h4st>UGB`iA7pKr0fIwcuBR&$R_!(XVDxLcVob{~-0-{A%Iuk-WLdC%&evzI|#Qrh14A^shN)}e%%(8~C=bA_sO_4Iv=l@ES? 
zqiXCgVfbgKJmC+=%A`GwF-f!XsR@SFQ5_jA5v}j%tgSSsxtW;x<+ymm{c0ENV7b~# z6R!q!4d4?hW2gMTeJ{s3ikx!g$13Mi&~LL0OXBvc>OKDXZf`NpB{z1bAOYwN_+>LX zon*pPnSR5D8tdt)<)Xl0Zlt{6q#qHYKa%s9pSzt#cc3cm|M)>n`#IOT2-Sm`{qK%>cQ|S>) ziJ$Sfr*-ktixs{`T7V3rSG1vDy4$3F*g-KV{({Cz{&|K{Me(5R$&LItFQ2>@wAo?7(%X2c8SH@KiO&lCK^?|0(&xB2>PqI!{jH5`+jtkk z4V2wq$ad*fi*I~Ly@Ph)xOwyo-6p1%P0DKfaJ9ChY_oi`SR@uqSGo5YzUen zv5H=-dKAVZroD;>;m1pEmmJ0s2nt%bI`aFzsB;!0VQd|Ufme4Oz~(sLD+%FmKMvx` zNG!v{SyS__IL!&Dv%n<|0VMS*dUZNV|IyH1qJ?uqr>^kT#zAJ{{PD~?mR!nZg?|m? zyrz+~SyEFZRc3lH|1(CBfxe2S+X78vpfL^I6qO&zk%hhP3F3e{Od0v?p}8^@Zdrle zYf}RBesPDY#DHq&gM9#99jdb8~%} zS^L+vP)xZ^g4t-^P1D1+o*5f>)gGA zhU&Kk3?e`(w$~25SxY_Ukk*+k4Du(QuET(rr zv(p_LUQkruzFzwGHzn$)GanyfF`|e6ya-@IE2Bg5#O1grB`u#)oGSt=@N8+41T8Y;{Y!g>9j&oB!2plsXg8&{(3p_fK z8K6EB@4mBOP`T_Y%6?Hnal%w#W>mVh*kcb&N>P6i@U)UpFe;UB(YbQ<&1^hAnj708 z69xxjq)N7_b)Z0L_VDu$beFVVqjb)ow8f-R4V3DuM8>3YOp&=7571T*(K+@Be{ z9Rk8C|FEAs51yheOSblr1#oD*Tq1U`(va!C79pl%!ODbdYoYu{u}=Ok%MHH#J||=hy_9 z>TN+GqH(cWGLfHGAdE|;q6@iNOTHv^BDNTPigTlwm0?IX85p_YfQB~YF2;!T-z+#2 zGh`HYD5UlAI&OeR&oDPGf^OTN_U|tFI{w#7=vb(sjecl&cqk(#2Q8pL>^CO9a-3WI zu+O^1h0*XdSbBGb;Q_D1_Ax}K7i?73xUu_RKf9&bRh zv72Tq;ph6yc(B_|QGDr4>(%S70$Y1+1%?)Tx^xc(qAn)*! z!#V}}i02fgvjC>Xd*D(M7oeqbBDiqWY$n;WXN6Wwjk8Tl4T1UloFyBIP0&bV_C+=l z`sA}$Iq%Y9obQ23sR>L|^iQ$`OASNj5l6qQ107*ljGpa=mIlF+rtjN6Oxr>&7-2gK z^j!M$jR)FQfRodzaefAyI6PD_c0c&u<|TByLyIB$8nr5GLOx`s8Tu$JYjFwB!GrOI zeJg4Qw)OM7m<3CP`>3r+H!}ICU={bax-tMS&1hgn37UKFP{T zUEYO-m)vm`#ab1EtxA+87kWWsGzLC}VM*eO$k$UcNl4)91&_h2vzv}BGP7s%#MFgQ zHN*qz-_SXE&K?4v)bdsSa9GRj5!>mP^R=3UfwvH*X^RA>r=lYx^zHn;S>G=O62?r1 z-#ol!J`=Sc)<{REO5`OaA>*}^@o-G6d3H6~7{o>#cbS1BHwNIGw`Kbke?oBa5c)`x ztIfh)pLXogCYn>r2DYQ!zPR4k-}_9KcGdS( z8}P>Ewpy6+k+ZmYE4Sy0ev^VyqtsA_8LMJpt#M3?%V%GkI>4ujW0w|mkEnD0AX_Gx zZaRJ_RKaytoSCpxcIz#*5<6h+Owpo4Ro~Qc`Y8?ItA6Pr*J}Gc@401AK7ul`Hpx8T z=xTOtFe=*G4ny`#C~ZVTnLe3Q{G%YBTqUp}MaKAayE0MKInMl&y+uvN1yK`6K&6VO z>zpHg3s-YJ#0dxPKBDqbssiAO9sLK9{8 zh5Of{fle-69ex*$1$_Bv@zt`@+xnEoMRPgs>*NXd$z1lrPfqGiS0z0@D$pJ>hS0Mm z@{S3MuQIVj)$C4iV(}u(C%pTtusnVkcZ$95k(!}JG!o9-) zGy@c-d?7X>469L(J^?S}gLQ9@0dr`-2uguokO9O&bvMppyF!Vhk~61$G>XRI29^xWN(nj=$@5I+#ve)k=0BSHB!zu# z7=`<-+)&9kCg^(y<-B|^ELVMP*y$OR9WOM?ZSxuXfr&EYA0)$BC*RYQ6V@Q^Zryh$ z-oK2Du+~>TDHhvi#Xz-0x^6+i=b*@Fbzq*^=$Y#FkhesTGt91bLkk<=i9-9bug1-> zob+V0DQFe}_@!teU%IxtKn}287|_#il-kl_q6kqxMgr6h-(bL_(s^kl;S=n#E7Xml zIl%mqeEnJI7yGI_gc!vgkDV|~;_7IBAnU>D>1L6w~HOGn03q!yfEehZ>l$ zOHGEQe{wl}U^L&s85bk(7*Mn`GFtAs$RBY7eMm`*Vqw0$bNGaG_90c`2FYggxU=xX zVAHZ`#f2squiuLaiA`5~R8cpZ+0Im#C1k~Ev>1Q1(@m|fw^(ANN*0gxfqxu6OdLJI z(}8AE%mpNQoqw7$h;MPoXaNNQj7h({9On~z|Hnprp8KJL6EYT`&OuFEQm8MXjc>y1 z7!)=5b5AM&8-X2*X}Kry;kKy2ahd7OenO%6#gtZWF-9pd_rx}Df8`@tJz<3F(=d}hD? zllIZoZPJtz+AATM)WHEh67Vaa_J^l&bkCXvtbE+s6GP_D@MTeA7f6QVM+Jwgfr$Uw zgC7Jz1gFY63D2&#FOht5Q#?-RkKKXW$PmwI8#UwS0V(Igz$x1p!tTTa^rH(yM*cOw z|JTp_c2`hE6`YvCs)FTrQugHEWC-{KKi~I1UlAu)%TMrB#ZNNhX-SJ#_=;f&{Xhb~ zJ`)c*s^iZf{{6G^9$X^zZ+n2P%|A2-m-}VHf)8fm{hK2KpWP?-dVP-Mvn?sWLBPK1 zQTTB%7UJc72UB;n@XhDV5iH`~K|uxyGXzB0kAGree}D1{0t^!W_rvf1JP`kTKQL76 zC<7b1583alPJCv6=bir3fbahM0KX6V;QzOye=fR$_*Yy0`;|JN? 
zzKVapz>oB=cKG)*7{=k>{KCHn{{MIcnWy``3=+scci10(p^ox$X^9_k%wfzCX}>u{ zV|oYPH*!je4IXeLrgvJA!(c)<03AAomoimch5vWOL82vKna<4?-5o;3+ zoKys%Zd5!P4RAaKeG|)lm*y_lf@{MkL9vzCd`2sI!Ak7!!!pyE_w)Gu`?YBDoNueH za|0t+q+1J6XllwnI+DtDsc@sm$|obLd`pR13=qHX_aDK$LbXh$0-brM=`seOIkmZT z^Jc#Yf#;H2Fk)VjwF%TBWKfZ=wfX%U9ej;}Xlv#)Q~{VM|2kwuNn~L27pa-g6%}0_9pq%K54xSgdynZKSJoEE#Sq}^Wem8BV+Q*VrfVW%aK?}F5u^woZPkw2!$Q*mmFA7;QmKY zt~l!&N<>Ls+m*k5+vmLwQAc5bgHhqWwPRGRfV5PEo+1TZCCJqD`kz#Gq9 zl8s?_ZQG2Z-IDBhay%CZ=bP&8zJYjn_rDV*`QP+(#dB7Bt7@${@H>VTc%>xx{}%oI z3^aC-{OkjODd8T%T618p#7v+c_XyBSA~+h&Onv}vGP`>ja>PKgfD@)X{F9CucRDEM&9O#{ z4IycjL!wCa)X~LGZldfe9jXajx2Gb#XxkZX!+9dhpcT*auvUc0>_XB9KwjbCS_e3X z%8WQ0RP}ZosI^&4$I${#+wzpW<0)XsVL3=&zEz2WaVTcOD8V=5Al1~A2$?|xR2ilf zdG5z$v3evBc{96f?(flY7J7}2WfhJM2fE5T5n^c3?3?b4KyZzjg9RzMn$A+#%{W*9(`Cd@bYKWv_tc#hgJqS>1ugfb@*%y zN1E~~<8rCN7wOhbqKP!|Ok28k<|U+NRcF%3G!gV?BX6p(hi)GiG&lo$NJ2@Kif5+3 zJ{TpLA9w=N52HZEVI{|hW`!z>F;4k6dg+=LRyc>{HU0ix*6mNcIL>dO+PeFtZsFEq z$mBX&TE*>|I5-Cu+(I-Q({J2(w`(h_aK~;jojyQ3u>8*u;UrOz=sF;KTBPi4e~L>^ zPjf$x@%!0id@PS>QhzQB&Sq)aU1{P|en(_-x`%0#h%`F8A|`#Qx}r%Ud4rg%`Xl@E zq=j@?ET=dsI8vgo0yR>@?6QeKEOD){-R~;AH?bhnE+Ok!9pv_WwLQ=O>F{n=Tpu)V zEZ>?WcdXG*r?<<2+hG-c3RbZ8>cyK4H*j{oq=1-M4|E~bIhD?12&r7Jn2mWWE;5B zr5tK%0Z%%O4Ws6VvXC4XEt?_Odegw!5eg1X6A;8sDh_Vf-_6Vw{89zw8x+;CS?4mtU-Tw4sq02X2c}Q}pKaKDu6)71kzvlh>|ck#@r+ zlYA0}y@Ns|-YV86Yl@qvN=4;-cihfG)hS(H-$>3EvaN-N^eZB+aV&$CMqc0GZ8@~^ z3wR7VJ-dSPgioRR#Lp{9FpX|dLgYFqXKDT1xgdW6*3 zg{)1#^E6Gd;5<#oupnM8Z;6?)*Tr7>N)cGyXe-B4r&@`JC>?7j0W~fwLY@-+tES}& z=F6liTIo1y6&igP&GaTNnxZA*D2L~<)u|Uh61R`~p$RZ^?PktL;&fqPMyAZVQk;yq z*~ue9tY7@sLfBV{)p>FMjL&T7}DU@%n3 zYCRW})Nxu<07~KyCx0)&fou46SABTis9@W+G(&635Xqm9K8Q!}i^))V)px{vZLZ(` z#x4TcCE9;d9xM9ZLH~!gw~VS|i@F6#2qb6{GVp_dKLS&5YhsI<_Mtv7 zd#Q~4OvFg;kZ5Q?J3vMn8NNf?=UAh4{VpYCIN?zXB*( z`C@}gxN~JDsTyRYQd?NSIl>rTVOc#rzZ7)WKgi9DTme_e%kRi9z{(R~6c<6jjME{%>{r>P@=_!Ycu zq{Q(Ml^ewmyX?LvDvRAaG1tMZG3Q-yG`PJ&<+ARU;@u8acO`^|NXy#$T=0!S;ty} zwkKJQ7aF+UU`Zrm}6qx}@bEv@rXy-nwh|=`*fbwCxUfUPb&+ zOY4&WDht)hLdYF1@D93!S1~tD+fu^-`J6Y0v#TJN5$-6BqjiX5f3M2{&iMkk7ylWQ zPgYD+G>5MOEY@>6Uwt~YIdfy8KOsm8JeIn*FHXne!{edqWT>`#8_1#N2x8`$giHS# zlur?oi6qctM-5JN;({2a?^8E_90C^)(JU_DA}k%740RCO1&zw8TqaQjQ8U2*nhrJNXq{{>~Ie(_E;?4bgQ6Q*P$+PY%iLa#1x6oh*uwa3ntGT$yv) zG&sYV?YWeG7uWc>7Baa9NGDtA!}Bl97w;4KIf@i7Lka`lZ{6=unW!|IX#Q*!$5Zj8 zwz5u5cC0v6lvy3yDZx0MK~Y)kXme6|hp}-a7By9=ji^tmx;EQ8Vw6aO!H|{z6vGb) zqy7`dJ@iSZvW&azuCM4Kn|+Z7Pw9+48io`)Z8ddP$w%(DS9Y@KEi|_?)sI}$Au6k3 z)V=$zdm=~&9%ISe6DVqFC@a=H6s!dPH{;s(%@ z&l(k0)`np)`1W)fBFU3tN4_qISAj8z3dddC(0@R^4Y(&Y|6$4$zu4=L+d3K=Kvo0s z$}EarB!aY%w&Ol?R#Dfk-1AVPQoaebr_z&OjX)P4%%kP)KQXY!W9Q6;_x|V76Lv?% z>%FXAkV{D8&60Kpl1v0FRB?%p0ZoWR{shtO3}kmF@!@;Fk#hP)nGPhomm68=BC;Mo z2?qpBbZI*U5JV#QlTonfyIfi5KBA2eB;%?W`=A2Pf3O9pTp9rgi*~s}8JAfR*){ zFOT^p6${K+LnCP)7!R@h4F9V z$hq1y`$Gs}QRqhyhH`T);g~0!KJi+Xu1`P$A-b7muzD7)2&Aw^^n1jz{1XTXi1+nq zGUM_77xdqu?BEeq`kBhjvwd5!@PF(}jf{%KqY#eXh@Ux@=Z$nx(BM}tdYRNJo-d~y zZtkmvoa>#IeAp#0Vkh|xJ?>(24?EZiS5P&QT8W@vbgCCPPA<-v;DT{0{hNfRa}^+$ zrXZ#Yyj}keutiwI9I-><{&KomI$2pA_xm$Rr-YJjP9XBsRu7b9L|4qgqG0-McFa_!sK!Tp&@Ad9r?NPKinNyEm?n|bNhdZVbJyr1Cw#4E@Ee|CgDW7{;Ub9*6(w$-IXlJZ&oa0pe0eh01I_Ky+ESMmUnJT=aGVC za`YPntd!yZ*h>KG<~feltbR?GNH2nW3JV#u`@REXIM*0@*XwlqNKI1NymV%|EFXYZ zg`S?7m@{ed82nM^EduP<{NFXMS8J>%e%&QNvB%IjurOF^JigZ^20T%g2d51PA^4-| zi~=%g>v>bO&7>E|`rbj?M<@qAsqRaw0t-0mUsMG6+x_?z%*zBz%YOShD9mOb)kpyd z0R$+HjiAWq37IQ}i?4KJu1xTSGxR-osjewk5=RjlAt}!|NE6+FwIe*iFC=Ldv!&JN z6VMrirMA;tHHbh989yZbwi397dWN{T!z3>5aQXw+!`miTF|wsm<#@iT0SXh&Tfw~# z02oG8Ax?Ey#!fr#WXDFB=XE`46cNNrr@~sqf_din<4GLWaujfHXh81Hed%HMZzZIM 
zo39CSwAEqF`qz#^5672mbh4r}_U{JI-v*5IQr&IN2=&VFXm(T|hczC{-4smBAU}E~ zz`Mwb(^S7>z5&f7I26j-QG&CO0V~eyKlogF+OOl+Fd!#*miPn6T;j7+#@vfP3(_|6 zbLf426Ss`b$(vYs$9^#pHNtU+gnWPc^J>{$C=nX6fk%p6+mxT!`~{fG-v#~mZqLum zZPh=qyT_7#&ytkMhJBwn#)W*M@z}uH%q-(d81~NALvEjRaOUrE4E|U8JKA{dZ=V0K zLQFeu_roMMAp{!`SZe^}a2;-=gu$Ho(dE@s+N@?TEEqRnFMRzFCR8^$?h5%9TdxgE z90IQ2|1C~tqrivlREJe4zFZ7(0M`Ye+UH9GbBd#)iZz=}DU^S_Y-pPTgm1EQtbj^n zKZq*$Bi-Vv^NrLMa%*_PRD|o5tKT-4{JWn_7A;$*HSfUEMeqas-6y(0Bi^%e*vz*<-f(_M^LZRO!GBVEmpp z4^OziBc&NAF$fHRP2il|oI05Ek(b$++|%P(mU;R)y)IQrNT^R0x^5N>TV0MWgz{@& zFj$8_u^xA6_Xg*~O5#VQJR`29=CtDgVW%5txtByZhA*a&a8L43wuZsXx2^ z1#m}+?{&L0w^~>#AKOcz3NlyE;zW#&?C$PxqPRFT+%{A(icTW2}Z?y>D;{ObB`PfTYmw(u1mOa3IWTKRpA)4j zaMlO#Up?K@d3C^~cyIOs8IwyPI>9xOVqc-9u&%~rIRYSJ*$vo0H;r7Rf5w8r-&x1G z31Y&W_Jfe-S#lrA({mX{xkT6!Bey$Z%ang)T(-VE)jty<{oNbef0QF&=N!YOvqGgX zj@@nP*Y`@!a-8@%9H8w5B@v_9=gAP3IK%sUm(hQq>8RXUDLRo|3R*>Ag0$x1YqBf?MCq^|nB z%Wu{?^&(6PJ_v@patb)fKhmzJNFe>VeF1Jk?e8HL1kRx9mHd^2v2 zegOP;b7ROP^Yg7lC0W)r08cbv6xDcFA}U9ahWNRL7qiW}=`a z?;DvPK4YuSkd`dpQ!+k|13HBUR*Ac>L(Xc4<9g3yLnf9v20O8Lj!rpd{Y#$ZI7(fj zoVx@?EVBfy!N43JgLHywKeFsgkj~2bfsWwAhQvAQXZxREHVF01@(jC^pWAm4vkiIv zlhADYyKD~K-YuiZw(n%ZJh9q%QS7H%lq}95UG-62yxnYJIm01;6n!3;RrDklqwebn z0Ms23f^6%h$$tGt@&V<-qlk^>64c3jyMKez46I?Cw8S8_a1i(QpD z6gV+O`J>HQkjg~jNkA*}sMy5sQcKtZdJC$SF^AxFV(^(-yJcQcoh|Axy}aqQ0TG&v zCUv4=B)>a)j>E`#xb^oEYX2v4K^I$gYpdofj$PPJvV0ML5wd?s4h#ifSA&qX?7D?k z&!O=n*rat{x3Ddi$EEXURm$A3lv@0=+w(@LlS`3qrb`ddWM!d9FgL) z1o=~JQj(#gqfW==e5zib7k>*IG{QW7UgtbN<7cS2;{fu98)S`}n6ySR)M?67r9>bK z${A$-oIGbNqO#v8_bO)IbkAgm>-4VQf6zq^&bD^8 zywQf;gZ)Y{W}n!gg>F<~(4cs)@w_}rO;VL#AlcE>5I&5c@yFjC8NtF+HO5%s2niV zxt|CTA?jK&sI-p)lGv_6-+=0!kkB_)wRF0H%oR~Z>dUi40=c5E<>K7eu?KQBdo?VTLZ(bYHM2V(JM6vYa-7I-+a}{g^E6wD@_U z&piH=_?(|62A73*4meeQ)y#u@<5Ke@501rfiq?cTY76Rnlno<?W3I7pF~!@x22kpL5zcYo=6*f&5V znIby+9nJR2l^Vl<58p7#BeGdy8?ql9;Y^KpX+U5^lwKoCG96_3q}o z)n{YXFu%e74qiO_+~mS0JF=-52}ohx%*Pz#QgHNOVpgbE6ZHRIs0u~q|6iyIv)@$3 zJ`17%sIRNJW&cLDg^;2ZZYW+AkFA~|c208wJLpq1=C=#$yeaFN=|CUdC&>$)rD7Ge zDQ=6k>u$3>iD*LA?@a5}3$%fRBhcSa3eZ*H=&GeRR&3TnECh@Z!P2jI|0ENTfj~&W zsfI`pf7$dgjYw!_gU zrZjTOTArc&q;K&-MBYsDE>jE=3;mk|AQIb4nTu7dSI0MaY#2%HH+;#E`tj4oSKzIe zCKS4HDL^ran_aG+2DrP1^c##6(#CE`*5B0{EJ;l9gL2m`GQ6ma;*Hm~K@Ye!iXr^b zLQfkY*!I7R(%(Vki)hKs+X=9}NIrK*me+=c(NW)&Yd>Je|u+4i5@sOG!?} zK$vJ+KX=s1b?;z5w-IQ9_-Qa}3R3yLm52!p;HXKv97TXvp!-sl1MkJDqx1Hmn=NA^ z2ERqp_`J%v%;-J_ONUJD-sOne&}70~(*BNd6my_3KtG@{HrOJTga*-}oT8k$0+72* z7RRJ%KT0Msm&_)e5J$>YMlBV=$R~@Fe#Gr!#iLup*EB%{w~IB=Swl`r!A_4LEoS~y z@DX?lJ^e-bVVR5g;+hUFS&_4`AK7bviq(Y6URk?7=a@kHLmEXS2!eC6)Eh|I2O5m8 zG?G`w(`M?-$7nn}IOnTZMxazQ1IDU1N4|D`X!bCe(lqNQN({ch;hPyKDKSWb4DA>r zFH3+szLBKo>Gs`!+jLO3AAb=?BieNY5j*H}wpf00w_o=%D&+^kp|f^=bNmR_=JD}K zGRBS;YmE1`O)fhfBT1={Ng=l0W-W&|R);8Mz*|Sx=xW_IN|}HrK72pY{}&d8)G+A9 zbl{EigY#s*M2Mc!Ospbii^vgXTp)g3l+f*h+)*cfaM2Wl+?ew%!0T0MxcT{Qwu~-O z#2YXzus@0|Fi6TY$I8dYT3>l-k_fVx*5-rD+uc#ZeLNsj)fjSUa^~;udTLC4>Ko=p zt%*?$VeTI4B5r9i8!AN4eu8~th{&~hmdV9pe=?tD?4GYA;mfy-x)q76)%85s)~5xZU`(QnbLj{ZF5cxL#{gX-q|J>3Jm<+pZJqHy}tEiZmtY0Dx({w zdF$2d>u_L2ec@+v{W8G#sFoUA5|mt}V~Jzd4>|h1E_dpXr`(^s6qz``;RTZU`|2E- z=*;bzP+HcXH$>P2Z!Bw9Fs{ZRMK%&t=?Z7dYE+PL9!rvH^OS_e6Mcqd5}>D0i9X?` z$X@cYry>S?F)Ug97=fZ3{<|ulfLqr4W_{?*OD4ab|x=sbgz3l%F_txfN$T>S12(>;hE4ghRl+_y) z#_^qB5$4stcLw1V`rOowhJQj+n}Ip=MZ@yVBNZHhakDB@E0K)UT?tXzH=(|s zji-f2k@%TS3X!<5uziL4hDI#T3C9msVQ# z)oo5rxij*2LM$A4#DDJUaEO#f*S=F&p`2hEyA5fRZ*B+o2W2XU9x)ay=) zB2<>pYK^iGv=`rWQ4`QOR5b^X)8m!JbsO!AB+kp>vad3zT|#qq;D-qK{Yh{pg(F|S zqEe<*!HB}8j1&$eSgqPlCp{dH8Vz!6S+Qogi(5O^OzqA^-f&)cCFVOp69R8tgHtZi zk7Ok{bP1ysb{7!NwAgW_w^5`aO)VBrwRP;!tm~5@M$A 
zJ83|6VD;eOQJbV)HjPC0rKB66*xOe6vzh^!Ay{C-^is#U7IDo^z`dwUH@3D-FtwAA z)g$udVQYfsJ)w*bcwE8@*W$QNC=rcpLGh#S*RTbLZbBX% zSb97l8Y{!IG^|tUY;LZ*Lec7lc7yFO?#tbilVFQP(AGtlrKfxUCfFMo8xR@ogI>V9;}%d&9-Q zPHG678s`Q%kbuqP(-!Jr-MH#H=JM(d)ag<== z$K;?ow9$7m4R6FsdZek2$3lAOBNVYKj`MxtKVbUN;*C9Zr8M|-_$>X|$=pS6+ZWTd zRQnAhvlN=JVkmXXXXeJpZ>@PkLEQDa7hV&?{(7w18VwiTNea3}sW6+FuMXgH^cDyA z^Vi&%2ayjjUZhhw|EC-g7+&Z4S4eQJPmiww^u1?;)99W(B|!SQG!UoUcND@{K&;0v zIs0W&`1ia_b(kq$MT0{ zjho;$Joe9)kN;?+fy4Y2HP7+K?&#yh3K^)3nCGp_hePBMx$}K-kc~)=-qqoxveVT0 zsmIa$k{5V4(=#6dxI_NB3g8DfmEq>COi5KSH4WpSMQ#+t{OgZH0ExGb{Vplqb!V;~ zc0C_JL@)u5$UnPm6OSeqCS@=H3S{6zfKvGry8}Q{n#1*W7*|BjuAn!hWvWZ$Ve$5l zEUw44)YlGPXA@dB%Mf5@f3v4QNBgfH`8ELfu50Sy#Vf;6L@8p=@sRKf8F?5?-+NzW){;IqK70SyRP5`hKpcdtM;xVw_E#zu{FM`^hA4e2+m zR6>q$nrL0`(?Kre&076B&^8nH{al-P|LeEtlBk*_6pj6{udj!aj#!dH?kNBFLzZZ| zn_W97^>>LOOL5+;W6-OEqY~s4@v##o#VwMAortp^ zCiOpj9fRMJSD2P=_nY)Ig?j!R3{0%uGn7vGbY*Ae^2zj|-spwFg`ZKDudkgN|AZVV+N%9`yXc1{YgFMa_5DOk$i>)*6WY z^?k4}iL^%S7n+b{%=Q_<6whDwpnS&mVL#e(J;_Ro=M`vFSL=xV{&J%3)4*4hUo4IX z<#TET`_Z=x^=L`1iGBEf<75*k%6#6^(RN#5C)_bhTvg8$9Fa+%=avWMGwB5TQ8W1| z@b*f9jXvksjPvNxdRmiqy1}sf#nv>u1cn&3$Nlv<$L?~u0aZoiuLFYJ#-9nvJWVY$ z3=Kc;_FEtT8My53{`u~^lzbaty7lYYfn~^_sl(^?D)(9E{(RQwClxFb*J88O1KWaL zubdQ|!Jo#QJKh$wfA>RwIv(c`Sfm23om&jVY+AhYhHt!pMwXBikFsZ8ZUes?(qJ+i z==#xT8TJQ~b*?dvM>r-L1H<39RE1Gz=H2dcQte?`CHUw60OpPK(MNA>(|!xUe~>q% z$0G4DIp^oc$||IbFA<=2t&r>`cw1$O3SKQ`u4sD)KgyA}_+~+Z9e?(gZRW0SjCDsQn)#yKA zNwcwqQiSlME6B@d6l)h*g~E~yJ&|G4(>y)D8N>>|yud$C0_Gw4Z%oboBp2Ijhq9HR zCLua!v`_^BSkq|A1rbU{v5tl~-C&D3A>oMCMw4e}YszYbYIg2B8dqq((_D%e9MhtV zaecC&yyC&{+l(Q^hElSH8qWW+TLK^Z)BZF6RfqPqL2^>-p$jPPVw0bg0 z$ImWrv``4}7OSQ!LXT78mTWegDsb%VSFc0}6(GpQz0KxQNcS=O@5D>J8E6H->E42k z;`}Fk9fd*rOeLm|Kyb0q1z5AeSLp*3+BWDv{r7wq{!?ed-6}(_F$~YMjwvRpvykvU z;9mrqqM)Ok*De|3Hv&y}8shx9Nq$l3qhox=Lo5OoA3CY=DV^_1s$4v-HSEFGHPlsd z{?xXiv>UC5tH~8bHh~>Hhgj}23T>&W6jkoAgtwQDnC)SQ*D2Usu7Or)e#IQv^rfd< zJYyr%TfwGpF*3SZqc9~c7hO_3qBVg7&o4Jtt{1t?Ara)m1d6P(M810V> znz)c{ilikzm4NPilGLt3CZM&y%6`BOAxqy8tzdFaWpW2|Je+C!l)Kh1nfXgEZ`P1j zpgY}^sJ_%aMRj3`^xDY$Vi$PZBYuW+kRUn1;85G#efQ78>pFVN3Sbh&YLVrtp^Bt9f}aNl$~KgcV|w`hC$ z?JM&s*=~fX@I)LTrQ58NB)82j6qF-F=)GtJ0iDK&^?hC|3NvN$-Qx|YW-~!3_Jrx^ zLRs2VvcA6cZTmbJ7aan7kpv=rj)z05dR}D!h%gWV$(kC-FgWx6@eZ>=*|atA@}Hv{ zQ#bc2(BKU*vk287M3CLm25%(Rzcm(a_L9ikaR75P2VKW5T^VBtDEG?_SdKS$TxByX zjsVVJatoPB`z8LEcJ{j`u20xUaA3C_|Cu&${~uv!r^TV5RI0%|i!{KPs*?yiS3@)U z1M<4@@Jeq`CX8?t(jOeU4B|iX__y-Q1R%V|atUL8?eVO$bHgv#)$t`0ApR=mRFTQ} zSqj)72^5+h5`0Q8mfQ&w(HdV<<_vp@JxPIPIzskJKUOFTgV&(Izz~fUlXl&>7=R?w z$rnyuDqI3`H? 
zCVNnnKNIdNsb>!qlAyBrf1{^WKy>XAcNGcvssr=tJz6%T%y>cAqELAkAbCRggRTJ6 z_rd>!pV-qQkthro!XRo}nN~SITvj5o_!FtWZY`OTY%+Q?o_HMvp9}7ESf6+mw;K_K zuZVKp5ptdd%$Ze&hC2ZKrtEB$-~=UZ8MFI1BKhmmrn>HlP(^ytOoi;>$A1wQydpMBqeCyLM*E6$H1FqKJ~GX55!)ua@?y;>WydOgEkIy*=MI4jlD zJ+%536>kgmRz{vB*&yWG5sC4q2yQ-Fc4>;hA0U;yfe8?eNhU{%WM2<>6_L7hIVY+d z-GUp`Cv!ve$i@UXcac}89zOnfE!q`fio$qw4nob3sl_mB8@wd@t)&mm{H3LryqpXy z+rC!o%ju8S-zS;~WbFJl8NdO^=yO|U!tf{i9TwDH2K4-7unU+dfF*50(QYq|8lJzaOq_F$wc52n;3 zk`YVbS~AtXw7u1SvR6G_CYkPGco{iB$+TG$o2^kS(=0Y*D`HnTYQp{`!pJwCTaAtr z?@DUfW$Jcz);6jw^Km{jFaJLA(K3903j5C9E6rf{#|*CCgNmI=e-LF9JS>~w%|@#| z4aU(ePxSPc+;7xns_&6I_{W;gQcg8eO7YCZ$i^WXn0DA5N~Qs6#wI@f2vH&TAjVr#-q)R$LL zh9Dw*XSMoH&r^fXZ<{knyk}6)o4lFce!|tbaHJA0+f?n~f{|&xg_bTnWH`Ed)CLYL zefi&X2W)P5f$P$PYaFsjF%sz~t6TBMFG<2GMdmZ$Ao}jA18^JV8zD$)Q0eDWvLCTv zM>|=?Sm1U6qU&!(ES-DB_}N~*iuJpX6P#^5+&mD;ZNm~+e_?hR?D-ypBnC^F2DxMf z#`J4tjWWED2VO zT=2PHgmD8P+z&;*F<9njW;Qa%H3dz1Q$5d-I&>vLFb zQg}_KufHX$=Yq$IBI~ zCs7CD+crV7@M=W6F~8jIn6`=JcJ$+9GriP^EBaXI8Wa8$r~?vxFCtjj;@O``Ne#~Q zF6ZP>4WoZl_J)zh4J56sQ5}LW#y`yf`Gw|lqObL_fc#qmr}28e4CAl}a<(vtb<&Ww z`pf$HH!sO;-zIBKP|*db8e*H0LY?N8!I9@x;|gcNWOlyXPg!s>@EWTZ++p?8qs`4n z386avg{uonjAcBCSMER5m$*J81>?Ry==3Z`AX1)A&o7CzU-AsscbcX8oI*lY7)vf+ zI+tGOta*m@s;RqM0ZXF@a+aalD5krf9S#ia>0E@LNB;1lxj5O#gx-GBB&)lzbS6Q= zB?W+(WnJjCW8%Q2G-0ThK180fIBx2k#fsNfbUd|$bgK9MAH@7ee%a_?+)U476KHyA z6wOm*xTe|TQ&&SnCEz-O?wg6|b~s&MIyE_0TSF_N<}>ngIj2M32xgH1>|Fi152}T1 zuV&%6RHqAC!HzOv@Jj?1B@~<3bUaI?MsFv^SY;;^e|3vJ-5(UXan(t##-vcSD!*{H zM?Zvr(5}~j&Zt-r#|9JLf*x&Q`?HM5713}5vi-I2n6W?Z;wrm2U|aG{do5S0kA&D4 zi?}s~m|Qp%-U{(^=<0asNK+S5X=k@4cYF(D#5xQq=rg(8*qn+#VyR5nlcv>XNGPoMV)t<|#@8dwf|g)=w_`cULJV(V4&w85 zRJq@=A;d!pCrCPAoTqMVy$kBp4u7^bqHoAPiXd_CjrhrE7qSR?4zlDv8LPVWuiaO>o3Stt;EuoX9MCa1Vd;(%oIM!F#spu zYhqwc&w9qm4Mf*};p73%t64zJt&pkiQQsAgqND_4lhCot+Wm~pn<8WreI{q*yWmr- z;fWXh@8o*urK$`sGIwG<@5Gqd<*+NO0F2o!9BYpum3Rc&-4!V+O*gk-c!|^e9EnK$ zJ#wE7LD}E}xYvwCxFN)hRNk;^gRBp8X!T`qU!3(d%!7zdt(c`5FK3|6RV6!B_x#wH z>(1^B$3!CaXS2-9t1{4jMa9TpJ*QOQL`P+m3o( zdbZuEUQ8LM;pY;OF)k_?y^Zrq{!lLjog24$bhkm%vZ+K5Uh;$zU4a>0|1*bfkjrkf+Gdd-RBb`d|^)@m9_ea=aB%_}Ie1N!%UvxeA z5{v1p2>A153`bHnUT}`ZCXU6I-&X~cw&7NU^b3mp1^1V(LQIh zfU6WB?Ma@WTC6`6FRjA#MRwU9*|0a=as0kI0(bb1B{O301T zsG!FxLNGUw3zz$>y|6F#fr>t-uPCDLyLmE{BN%t zAuyFa6!%NUEj`E;MFfNdMSHW4_q&B*rC0Gy{6|L!=iykIhvoDl52~+PD6Cn`nij0G zNe7nS8>}@-)b6EB0`^3F{tEt7IsCUUM6@Z{f67S8C5k@)?k=2nh*M+Oh5WKJ~Ez6*zROdoAA|$BVI(C_7Ok3zYOjrsJyxN^>?wQN84Ge zvoq<6sV{$V)|+Zr^UL0|fg>uTTwqNyX=uE>^z*a7ORqva;P1lZ14nxAn-KXOswltq zBU==2b5PWBpwGE=EmilOD!ubiPbu<6g*76j;^tUSG<)};d2TCpec8R5S6Ua08f&GF z@t}U}m!If>{|e3sPWK>FmQ&jY{ASfTr|Q*e>yFBJ)BKO04!<6jn;NhtqhlarU?Fb` z*_n#;uX2Y_zEg&@E3$%-%mGUEBA)Kz*`%HKKkMuFL%-GyeNpL&>2_wbss+AD65kMJ zU@wN>JYVueid3Q?-CsXH@g7!rTTkdz0ezB~O*H+9i`J!d!fQXp)4DI9y*3H;L*tk} z5*;9iX5u*6N^_3{6!e;R4HZsiHVZy+-TZzJF-W%2YDn~sj_`=5YIs+NZ&KUk+|^6( zYCh`T-F~?B>u5-xC%1ht01+?rX&5)M$Au=AD(oD}V5@GonOW?(l}3mw-`O4(*;cny z*I+3&IA`}L5{+3}O(_^y1gNEoqEpSfsl_9Xewri*!PqEI37A@6fU4EcR5UJTFo%kJ z+XT9O%dlr|d+2%5!92MrvxP! 
zpG3*Xk=s?2tfHhuS+94Mh^#6ZRYV%r6E%=9qidmDVJJ4;-%k*xj4@kMdwph)Ia4+& z>KayDqEcmcaf)<$^0VUMK0|;)BO@c~qHA|l$~55Ve7BaVR={7XqUxhS^Ad-^_oLP) z9)t9~-*%BtLeeLdvIYC#;AfYPV`k4aJa232*$7W~hAJ(QlylQuKX7oPTRhYQ_rXzl z+vy(@MnH!Ci&oHJfJiV64b-i+T7HZHN^Jsvh*7rpZHd9Zf-(3|eO}4$q0!wtcDClz zY_3;AuQVB>Z%`hR*T*4huol{E0i$yrcRI z>RoucNn(yJc@&jT9)F@WO~HGQUEg8E*%oowGQ_MpgEY1}{7x4k z2z!B~ms{$C_~R7?i)N-)77>yHT3Rkm#17f(XS{m*m;KC_JU^!fs(rfx;ly9TC;Sfx zJ(4qjtDTd@_niCbds(K*uK24JIiPwjDk@~cV$DLQ)!sfyPM*CeNredxf&n(oDEs8Y za`|`yvKX#M$(SidNxSoaZ!CBbwFVq)Wg(|y`vfbrI9q+^hy_@H6~cp$=}rpcl44P* zFq{JDl5?nJs7TH=UMUq@W^`XmfuL+HS!g{{z_y}e;>w#HGoB&i?MbfqMK6xiWFK^> zseIaphP*CynrpPMyj|>+to!Ol3%({bF{1;$Sb1QrZmF8oQ?Eu?DWbbJnLw>S=eu=ORc%B6;(D6=$_{ zZZluZX=MnqPe~yusF8C4l)Lw>TTte%X%eGrs=ph2u~4O;pt7{BBKeKMTXEH<(oBli zIld+`XS%rL!pf}^-u2wotFq#h`r}N6oF!EzudDl38Vi~wVi^>a-VL?G*}SUy&2|=2t7ezPz_=6r5wLlPY_oqmu+X^0p==Y(pmwE-hybd~qW< zZ;+26qdKO-Q-+9ITThOyCO=R~2hGAV2_XcECc$6*sA%X{3wf)*d#=kFty*Gp9_OFk z?IfBSqiPxV$s!lAMThiAejQsRlX8=D{j2mieHetiK572cUCR*X)kb9Y*ULmfrK$X!*&1Q)d*cgch_vqEcaTur|#Rvme~ z1haum)#CX$URUelpVan4td3AwUYa0cRIY`D1&Ft_ZjorMVvb@3yrv#i007+Smp17@ z&ouKf@<`5um_lOHVadGBb3CfWOZX(XqIU3)>I#uaOVnu&^LiE%D$L%NMc?Xv?rey0#Q4I zhm~lS)pToWAX`@&vUQ7+I+eB~8EQ)xy))xIhYa5qkDSkTp)tE{0#R3AZ5$+Ac@4Z}~7&p88HY*JZ|G)071d z_b!E{FOHS1CYeRrEeVar)$9A}w`_cbg5A`*;%Ju93DR2i%rm>nRmvok2(iQ6W~@c& zk&K&=?$;5B-MNpbXA43&LnXzSa!|q(NuoBhTUs6FijUQ-E|)155013f#y|6H7v6f7 zhUoH}2Ti}e4bjHyZh@R{tiR@IV7yWF4Sy0;=VaG~T!-fjo@5(~ z%#RvPCjc7nsLsDBkFUA*mRB%L4QoY;5>pNl=SjgGdY(l>HDfqRINMG6j3kHQd}LW% zhc&?xr>!YG^jA2v6r6jn@ zWS&`|N5eDXp^q>d$h3C0syvFvo;}yf1N5BH1><2i(y9C98^wnleD3gbGMdUFDS(xUOIkf+I(~8 z(vaEwgu|GVS1hc$sm19yHTEsu2gv=pF{sl}gRqF{izE&T;0OJrzUpyCPAq%n7h$@F zKwp$u9*3|@Mu5_*wty-1r9PtOi%*7867g0vIc8|z{x&|_V#4xdU+H?z{%5R;wxRi` zR1^r%QcPlx+PMToJKE}E%Lk$NVChoNFk5;`ouZu^L39&Ni|Sm-Q{;eTsV2RlE2XuV zFB8wluShctk8bt!lkPX>i>=480UemGt26U@>_KQS-dxwzyWVfTw_rZaI14c(ybnFv zr!?ttm356Cs}rc?lwMTKU5?gJZGSb{Tv#a*R9oZ*DwWWvQ)muxQN2~xBP7i!2;#Hf ztG`u_ywZq zMLiK|OLf9C$VK*=4Hf}CvX_eqL6D6)u(J{R15e5_S7w`5hHvfOrQ}YxB@X6fh&Z8} zNZX#lg^(lE*L-Q=w3SiWEmzB?yR)kO`EVm949fn3Wuh!tH_ldmL!t4baT~gL-KL#J z`Ug9TSp!nhf^ml3*KAU;*q$Ae@CkTD%9BCmTOP0-&CrJRn@Ym{D#wfzr-O@fgzNAD zr9;w-_NnxHm5xhD{)nKQ2Eh(F4bG%~!O+x~uArbdsf(FIgAJhb3IYMeBkH)97wg59 zZ?zm9Z>>PatNE6TJB1AeQB4i;{EAr;BH1t_rjP-WeJZu)qC1!RL2@&CFw5p{)RlZ* z55m3_i>^FC@yYDTMk#|EJoG(pg6oF{=?k=e`APR0o4J}Tazz2p+X3(kBrBTf;>BC8 zI*i4Bjb}aY*Od99Zu=H5oSIT2Z>b5$6 zQFN>)Fxpwz=$TZKR#%w|yu_3SywAF-p+#BjRjrt{724Q`_SkwBc&Nv@2srUu_mgyS zhSq9x6rK%Cew72w4RjK6e-T?917TYf%DxB1RW5wzQs`efHUP1yIAs-6?`^LXCtKGJ zgqFxXylJoz(=$G6tU|tvjampWYg_7AUCm<4Mi3pIV@~Th@*Q&6OPkFOa?>d6(MmJaqFWaWU(4U z0g4u@Q}XI5JhoaGd(ey&ifDJ&uLgFsZS}^=8E#B@bxZt0)ZnhxlJ3+8U}DBNzNoo#oj-m|Ay{ zaX4OM*ss}&j^4k(llrvv`C7)e-9pvY%7oalJr=O=5&u$4#L8{kTAefzcNn@L99m%Q zp?cMFcUVSPaGs!x5iTv?mCDGnWiov5_9L=7WgzNKzMMUbPHaU_&=%ex8Tl~FyT(vX zdJ{&RUKUN;KunVx!=oeCMFlQuDdnN$GxiZIU!@%tyQ=*C?#T-27uMzLZE7~B zs-~zoWAPVJ&`~Pg9~EWWL|#ujK-g)JSjJ`9ixV8r<8prt9Y_AZ*n7*UD8KJ-SW!f! 
z6zNm}rMpw41Oz08PKhChj-dq+rEAEcbB3;=K|pHg?(U&m`X2p$f4}>Gul2lmUOvxS z_j$ou%sSU~ookd#+gmM>}gm1p4xF@1>Y(zgSb)FN)MktwNif1UFVL zPKva)fbil0=dE&Pj(ABvbm5V3XQV5V$^l7D~6qT3YV%mN}l5)vn2bDCaeS3R8H>m_rQ`fM4c_0DVMmwYcFJ zP4t-+!ha>QzKV&@=`V0WU0IjIBuz+as(&+GO(m&0N_xL`QJfov6(iyVOBf|_bT5sT z!pLP`_g86DZ4^D<4K{IQA8@kkzJS)?CsG|P;lW!qu7UfW7}Y8^N0O3e!$+Sys!K^$ z*`HRpdi~t#z<2$WmO+p8fhxWOCB8jawgS&=>W@| zqn6tP?QcBmjcX@neQ}ohF2kL*C3EGtciM-)?y)wIZfUNh)tU)G`w+8M3HGWxK>9i%rFZAz6!9I&1P zW#Q0|b^mPrQX?+X8=(V9tO)D9DAX5ZBAS87V&fO9pGHe@mgCNo?aG*ke@_Pa=B$!l z^~ce4e4DYdy|x%g5q^;HAvI~lrnxFnJvK6SRKV7IVWHdnD0=Rs{&-HTOAg<6)|@8n zOnb4wNoN|-;v#?)0=pOw4{KAy39~llCj%oI*=H}@Q-y=YuYTKNpMo}$YNe%1)W$lq zF~SZlN#!l4H4q+?MS}K=`--yuoaIl%p3(GDdV;NzYT6fytJ9!OfSsd5py$mfB+&X7 zw=pekyy8a43k{E{quM?nWu%iq6;mWG^HVmr?rD{=JLlQ=%;j98x^+^wPC;GN>8zM%_&6YqXbY8Fk1Z_Y{&sPM!!a_=H-{}omx~$9Ou{B`mf_*9@Ekl30{n+!@h~BLsMQ=e%L7vlZo-~KjEW+RmKjM^7tG^d{h&@*;&{$w9F}KW zK+`1>L^MuHg6o-e<33Vv+`T1ecLJwlTZvIFvyO;ullB=On^mZh8bx*_DA3TVp%p ztJ&C?ihIqUYSc&`mGTP-LPrUTl3$g#enjMijCsCN+{b%%T}{ktigB^$l;U7lQ28?I zdfwIex9S`iI}6rA)0|hYjgNMpc)Hlo4tT3@JP|JdIXacn&M@3Wzz=_212WWHN(%Ki z*W>Eb3>Fw-t3+!?~eI|0{A!d$IgJ7*5?XuO5)0sjn-%_e1Ngm8@=nW zr2T3H^M|=g&RmD-6b~KV>!4knmaiJu|rW_7s zhJ{B$7as_Eo>E@2nNfg^@)i^n@%bmlx&3B^Vtu^>c3!;)t2CsD>S0S>yQDGb$(B4j zb9FNQQkl~dC*+S_cY=%bTqDGK*22-8kS<@N0Hy5u%jCX;B+2)6{*+To<0o1aPiDI; z`=j_o2bdiRrnVn+9<_ZCji;jB0EGQ0UU6oA%{w(^;}|f;C9t}?ey&=pvA-)`Dl@iN zkjU1EaV+XeXxQYvadsi;fheGzc3>R;T+jAazcVM)#A3VpBgEBaYu-Dev^+&wEG;IO z-Kr#M$C~earDiaqXs#V|SomYQ2BTny&)MPZZe6)q-dxfHVG`}ln!N@E$Ja+s#~`%| zmOhr2wISImUSs<#Iyo5zkUOiKB>#_y7411lY)4u}AcT$q@M&7;Y?{IM=7|oc!n+8hkrjx zhMt9;fTWZ08BV@#q({4h81Y4z{~Z!JH6?6*~WQd~>R-pGE{q>I~S z%aRKl77II_K>pTzQ7YSpRbrRTYZ6&;o4D#t;-w)EJbqv3<+yPSTQRc#O__v3RX`_g zir<>Sh^J(!iAd)7R&C-;)(OB@%uv7LCi#Xm&Dhjfy4Vkbe8l%seF_s#_MkX<9{a0# zCUf4k&Wg7MSo%qA!=|s$1V|!1wf;cV9&?Wlt<Al3Gep%OB5@$g-p345T< zJ$J8Z>3G03oN1u?(+#v))u;K8-lxVj7`L$aMz*4!Z+A<4qAMk39M5jvziyghH8%)S z)G=N)ztUM$1Q?C^K7>B>6+oiC8ETyE>S(vGW#^beU%ip#t@4Jk;Rev86~c-pn(#|h z^D6WVJj0}PVj)wCZc$e9tmAG4Ui$sJGj2R@r^4XT&_?Uo$r;}x^a)eaO`$409KkE7 zqdbv--`nl+keT&&%0q`i@iUYqr=(ZDBY+L7}u@tCa|4+lAbU4WAv)@ z(sg3iHudL3;ui}g#*(;H)@M09#Rl@mn&zq}JF3*mE)iNZ#b5T0Wf=?1%dtnT9~bIO zZ+1wLtHplSfzko~Y>)u|oeak(R^Ij+1=owYJwQK!))suDjf_8gz>=I+{cuhoKc z;wb9}#>#ac%$HZG;u}MBGX?!2dOE%9^6uDqs6L4EHx0UzIdYuS^ zXrQcs5UV0XJ_yE0AnOyb+EWNxMFv^o}j3plj(&9$L1r8a<8TdA_bbJ3Pk&&ndE4 zxgSW4roTC_OM@iT*?J5f2-k0*t0cYtnDA)vcSN%q7WA?NXd4Ym@Fw$12|f z`aU3I33)1}w(u<2ER7oy&B%?j!o!gCaEn|sV0CqFkfJC~uv zxBi0C{Nj=&Y|wo~eb~$bYYec_!RiLJ5u0DarAae;9#*o@f{z`jH&(jiFd-Z83bk$G zS?6gxaz8NjaArS(JT{@u?(*hbd{R^yQ<2!XmFM2uHM=6xdV+iJDE_GIlrZ2n30h?@ zLEvKktC_ekR@{Rk{D_4^gKxjTXsmpgv~xl69X(jalPY&JSe@z#Ow^z1l@iyDf96W{ zw7RLeRQ>JPV5SM^Ca)(heX5B*(vkGAu5G50-{&Msb7`H@{(r!OX=YPfjJ%foQ$#7;S$ z75~JUw*ZtRG)6>KJ=hv8%`DY(kt<;^qZk8EYEw7}=y}F}GLNH;{?Og7G&URDc9_O# zX=DoH5h?ao4N!#~TA#ANm#O#is@e2|!`NIBOT?O7YCBh9{XH>k&O26i#71UHt$c%5 zL<-tvcHP$fJqXHu_~Fl3h+GI&N*rilFYbQl$sxdgd-wNrU@? 
ztkxXgJkH(jWXdg_3WPjusB=1NbN{Z?p$CZ3qzUV6Z6nvFxd^XUM0$pyeCE!!jby95A`+yma=p_mg z?*6h%{wWpwrSTv?S=Rvrx{K8zv1%`K#QNb^ZOL4{eu(7W{y_y6Y=6x`3a@;Lf$5+*~C-rpGQa|7P0f-QTp(lRsHV z=>-SCd{CIf>cq6zyC13d$ienUgDMP5zya2T|n_eZ0~Y1y|@Cj4xgJ7ix5X2*U>HF4=&?fJSPtq z{qrE8&BsiQf+qQnmi68t()WSFztqRbSEhI4_9dK&seLhiFo8y0Wjk_dXYw8JbzJLj zn0tIZ^+$zw4C3>Op9j~WWD-F{mJ~6YyRK6zURA4u>-X?=9cc!8$NTQu8U^DOmWg$o zQDNM3vACjx4iP!J_k=lwWt*-l{O1>(2GOMiFC^qVht(T~;Cik%CHk?k1x>MX8Z0OJ z`{8q8J4Ei+4sgR=U+=6OeFIH>iQ){AZHKVx4B606bKlvEKv(-Wo{LXsNxTZC$0+%C zS%@O;2!5bY^^6lm*yv(m!`-#?Yc9~bBLalDIJ@pxo~{W~LHWTpIf_Y79v&7{pMJQy zb^h^U6^KD8=YUO1j%;G13)9EcAl_A4Ce!xt@e%;i!#*WOtDc^w3V+ShiJ+*`SMPK7 zZI9N3(o0G{!SH&hADNat?}r1TcXu-;X}BLzg3{D582X&%KYCi05au4!Tg7Bszx98L zz7|k)*;oeI;z((Z>r)ghEK#m~+O6V9(S?*aoyuz=YN_OJ-^qXUDo)@LBO&$ zdha*bMiw#D8XOuoAcrWvVh)yCT|o|mpl4OBKi^eVGs9i> z*vu@=%@wE~g;VcJWQH#)yS~|Yr?b>9Y^X4tpk+PCcuRNmOzuouTN;*7#K+dZTeHY_ zi^wWbar5a~TK#@>s_(<>q$c;Rt=&l`nL^Eyv(g1%hO*`*NYp^U*?{8Ak%&88*_r0g zqN>%DOY`0HlMve$M20QHt+o9qB9eZXOk7xm?hMmv0v*l z*2%xe<~55XpMGSgimM8!<25&|##cQp5?FDU)>qDXhEjOp3Kyb=?z353rA)hWJD%yu z%@@DqjUUaJx6TeZ_u_+p|1kivKBk=4vUY@P28-GG)Xg)wi#uz}bUL~`3IJd!=N9m; z7VQ)zpO~6@Fc#lz!(U|EZ)=Fsrgw#R{#q@~u6kfVmNvU3Ua}Mi#SX>{> z;Q0t@uQG6>)HPidkoCTp^Z1Zyy=IpAJ8ac`!0>k*8AX+Dik^$QBJT(vg6$PtM)0!9 zrBWuj`7`esBfMf^QOHv71Vm6BBXmVSGDcak*SeP=Kv4bB&}C6bfS5={|nJ2zK zN$Ge=cZx8sqg-KUVLDydl{0$UCfM<5-JZGMZeJ&5*RV9U%K17I)i3Cl=X@&O#tSds z(||G1*E{VwC)Hp|#auNIIVh0Wu@8Fm)DvY|7+Q(b*_PDE1Pc{Dn6cn_`%=EDye~~x zGm#1%TP97a%=zdxL!_ujQ6vG^a2ttS)WD!|eITzUxj;HV!|O;WIwf#*$zqg;jxYAk z%BNgqJALxpy1}Hb@75QY>2lMrbFm>VUMxEQ5C)rkI}XmOawfA16YG)i* z-<=+zT6;$;`t}s~=0!^;8&Nl*1thFjsr*5$u2QmFPX4nlwziP6l_vQ&yJ9Pq=!0%f7e^&VyYufq=7Q?3zro?M z{`jM!!9fpTB> zOl@+5V*{{moIAT;P)e$Gdp}JD+Gri)+#e_^6Bp{imPN6y#7!nByPg;$CcsA@3i#vpO+fbrkGghalm?@0E1!PX z$ibc5P;&i@AGWfpvI1lU?w37vr=1L{483zzFSLqeQrW!3Ls(nV7n~0}ePvU0&Ao<8 zsrD)*$|b*!>rWCH1Db$jvHFmquV5t|KjswRJC6sLL3X8Ksr?ZQvRTO@`kp#zo$g125C`wBc z-+1~VGnNpmP~y$bf&HR#0#T|Q)tmWH7f%y_9F;z~ZZlx-ERmxxKoqGl^7-iuqJrEo zsNJdB`sl^8J-RRO0j)TQ@<@kq(XV7Ql<+xp_A!=uj5joYpig@%n@lzSijUjB34>p$ zX9cho5v||DOY+1ic}KS3{A`at&4bbs>Dc)MjvUjHU=S(!+6NcrjHq%HCOYfzEk zsUqX&79gS!-b{NA^WS{^gDeR6?jY>IrN{8(qnIOw2Qt9<=gsMA#H zgTI*e!N26I)16ELqRhyw!|F*Ci@$nG_u`X3F7nIsb$+?AP`z>LYZlO}E2eVQ%PkaV z37Fd-e{IKQM9{rHV;6aH=T+!)o-{mLImemIH>{>-1#)1tJZ&{}CYKL3*XHP4fv0o+dF894+J78@99z^J%V%&g=1q`P}2(NA)iDzuFb;d~HpC@*7HIO8$L ze=$IRJnvFjt3q+Pjvo2|MR>G;rIjprYf>sO@AJ4bq?PZltc8m`61>24Qsb zs-I4^?7hBJg~rsX4h}c1+XD+#bC?-==|jMXGKm=!<^<@Q)gX0;Fk+lemAl+^@M8k+ zu0u6$FR@Rdync+Ln8Jjmfda`$KV1n!S9-}$@|a5_nAo%K*f(+TxO}v^8)4-m6^+bs zsL`Zmo4;~p7MEls_OFGjuL0y_{NcHg!L3d|Zj~<=ijsCIiuK(uw-iC}>L@v<1Zxt} zU8O!1--+c!@-5*P)5k<1+ciozgjR+>5~>u1T8o(%*!MR3bBY%1^>HwxP##i^Er)a- zAp~4geM^Gj!*7M&SdG;lm!m=qg3x7Qa_bi_L zH7VL(Jbas0!gw?Ar3P}J8UuTbigEto7bBm8a7k$Z8o?exTei@iHJ zLyXR9m;Kop)bHzXA{kG-_mWf_lP;}KiR2|!kK>ulMLmq0-b?q3;b!=QJob31n?Xfu zTbgq#!WT`L_1)%PbNo&-)nPcrW>F~%>sqx?8pHs8KndWHr%qp|TZN*@2>xOU*>2y<*{-Z`2mFtvf8fa#!GkiDP7{+zaZZAIW$f zF@5u}D?F5`(EoP$@J*ePfK}LrmwPO{`$=-Z27fNW)&x{ycL1W|aOD(Svi?vf@-wSm z!8x9fN=+JEJB;MZ#rD~{9-ACqY395wN4&Num1iNZwG#JpDRbg9v3CG0fM_Pu^J8L? 
z@ZuK<_O-oN=-8=2b`G-)wvNh8Z3pZAKxcE+4^|gUHsMBJ+0FM7D{Ehh*0DmE`)MBA z!Uh;=J6&W$Lh!tvEy|=SP%)x@Xq(YvX)5KU0*yqX8Tm(!_jVhEsv3ZWBH}t-&@-=i zi+FRnSmMfq@*)A^h(^K!dqbU?fTrT&W9wN<(EVh*I6|OwpVcf@-+?~gS}fI^1OV5O zm#@O*c)@E^kF%Dr;_ZV0{{k0njjR!(cC7J>zLx{J;%?(R_zCl^OJjl7`&7H{$D@JI zMZ8syh1#4Bj?{eX1ldRV^Xb;gG~I7!Je6tY3X2^^Ngi0mm@Eu+c~<2*;;zJO_v2*o z=`V)LMtYyRBV2{3tt^&EI-&DPkdO;U0!XBqMl{jZWNykp)upErz+v0~UdO9JyuNy2 zWT#}=opHXo4P}$jK!bpog7G*P%DdL0WB^X6`pzJlx0E9KI!9$r*vup0C0ZwH!3q_d zwr}4)5FCINAj2<+J64@}`78!D#KMYY?;5>$*tERmEAl!smJ8?|uvtTEBop{aGp{n| zkticmlDg(qu2W3jN#zxcb{}BY0)ybBlCpK4hK+PR{@+8LpTU0$V{T3~6&D&%Q1LJ2 za7G#X(o2ixI-fH{usSays-2tT`_EK4i~w)?lG<@T?KfntM$SL7I$G(md7n7MJ3nPe zTte=QiROWpTaf|Vtxt-VKpV~zT8ywS<>UosfcWIq44%YT@#J)YO< zwLUs0V#VpszOIw%mC?}f;<5ivCqFcM4VtRN&%hK4xJRLGLjYooWu78aln1w*(f7QV z{IaNzF>kT|^6ozu+!OuFqwi<%tINSSo0S4>BZ*Ae)CDR3?#TX1XFg3@_Oq?U>u6cv zNlf-KnBL~hni4?$pNrEY{$g9x`ImHxKKA@L`V^)BP-7B|Cakce<)$|yj7Ls7a#!cC zw0kaqjDP>3Q2E>Z=JEa*xO+fTo&YkXuD_b=-n*C|w)RIC@a4irnuj%lPg%%2Qa=KT@CbdIAh#zF}dhCVziPANk|Qc<_h4X|!2<9#Hx({|)QaR`D{dC7?5u zS`J;k6a7vVe>(=24LN-q5IbI+_;wX+_~H5AclM(`-@CiE4j2v<5AOX_5a1ojpM29t z&qkiN<@@^n-l7)%I|KL92k_nY)$YQ{!q7g(Ki&Ss8y`Ijx$DfkZFIkQ`k!tQu@@0- zHSrz)jAoDs9ftC||LdX(HrkA?4}kJ12L4N_lazJa^X{f|iR?iDnVRiC!*RZk280>i zD=Q-h@?V&t#G!`r%>NGb9rvg&S^k|suYD<*IoDUhKQH$0Scwr5V{8^yizSR6tm%D#_vy&2tJ15`QcGPj~Y0G^RLj&21xV zQ7X~Dgka8`e+Kds?3RIbYNu_+dzz;F&u#Acx$J$u@a>N!r4X&=>WcVh9PxiP$|K5x zAM6+GPz6kMXzv<%H7j%Z-F{xlx*xQ^ruT`&FJ%pcB{2k`?9+MH3Z&@}jWkl3a> zhvD0kmufaO8Qm`H(f3j17LS;3x|_Khn4kM84OqK-Kr>8`#z{jyjO1hu;%Tp> zRPA|X-CjsJtkLC4G{aXmjWNSIdIOw9JVNLi)zeSE_{|(3{{RtMHu_GDjMan%QZ}04 z_1Np7+LG_y&AjJ&Zwl5UDez^3pe-*@w4dl%VzK~eJcSLsoiz*plxjU+ zO#1vZ=zgPC=04TH0&l{4cQM7ql`ziE9rL4)o<^e(9_3-{-`<=Z%v#H+E?=o&=67>1 zH_)pO^(H?zS4IxwUZs4!ysyVy*1$Lk(Ma2uo06@;4DU|77OEL6EUfCR#J}%6_!$Z> zj9!lAnXA2$<;!!*3$KnK7qdlvWlIqr;_B197=m*H;SenQr7G)*Tx+1^z~ScMNS<;U z`ud`A`uK;ZH4TkN_wK#qAVq^*BPm6$x2I2&CqR?s=Xnp1EPtP2u~odk;I_OE3aJ5q z2s&#)7MkO=sgsa>gWZ!tU*+fVlY{7ze|9*lbj04A zhbw+m-kgxL#j7ke%5N|>E*!P{-#I>F-aGX({*Q+DBgXhYZd)UDLGyP2|NdEg-HLwD z|7biI=)3%n|KHL6Y(xV;|7kkkQC5^&wfX-3*~y2c4;`uB5oBA_PM{N|ei|qIJstk1 zC;5H0%fGa2|M?;8UK`rW_rKfV<3A()AMO8k_(vH3R}TL_lY_m))JR{3M6X{WSjmjo zkEB1da04ilbSalZ&%o0w;WcR4yd~BgO5#LkNjOR8ZfdaeM-0M2g6!sPs%cTGx zPT7km7o2*803tkah|um^a{JygdJ8eZl~OTr?xX&$P3wM@d0G&)sDDy@Y6i6#XLBr+ntp{R5@NW$O)m!e+S0 zKW56`#v_8(D|$Qt>4Bi3oDG6w=2&nkC%ah|bg)RR& zCd!5BtzoC|r+N<*mnH#)p#>j9Do%1@SxTK?OlL&Q;JW6E4NC~GXW2|jFqg)t&r%ya z3_DAvw_gRT+WAIt-NF1=AVvbbQ`tmmjpi+X>vR$@c~s1Cs=3bHUj-Y=on#4B$J{Fo zKmDNQ!>1f?*C6Agqp{xgqqck^aR9 zt;~}iD)^pWr)j`j>l|N&scLt0N{dPcNP)GjZ^E7diUv~a!=mGQi7YuK+qdZ+ZOeA# zXzjL_XoX%z*5Teii z0aRL>y)T?EDDYADBPpo?`M&AR!khJXz!Yg_;(m1v99-^!k9r`NR6y!UG1BsigZ6YQ zG0IKmB<7BMWPniccq_CaMXbcs+*8hg&)FQ0O5aUK4LFDzrcqpY(=bv&bXKLkbqthU zAcOg&rVqA6eeubBDm#DlEAhH9@SAl1hu*Gg%^ERv>G_qDx{NYrh_)an$}n0{Ywt8P z@u{qj%yMYg_xs4tC$OR`?|+#!(e0EsLiyubUv~ai*XxeM%i_G{w-Ydy1K( z`drY=6&3utBvI?6pev2LH*Wf~^Y46 z{#$wb)Os8jU*rHA(cm9)E!xx4G6@)hA(U-1p|g79OUxcr~JD77i-r3L4ePe$Y+Ej+&K^ z^ebo{z@qX3^+YJn*Z-)CXoBpF|zx#FK?r4F3XDM;-22J3!hMylz=Pd$4HMIRpo_VxBadxG{%BkIn4l6B7=9 zHUmGI&YRtOn@dQOXV%Ejci8CRtB_QrY5-?(D(4y?xxsEfie`hHxnfzy_~-NJzFkir_?oMlT*H9iHs5ZK+_UY&Jt3d&VFgisXPW9 z(k@48Jy%NJsnSYZ)&#RE1|JPXd&Lt=&XyBQ&!4K+Ig2zZ>7BhY(^$e-i`deqP%1ca z5sM^D+pbUdSYe7&O?B5y$YxmvlKJW-|M33GoinBE(I@Uby8OAika-|XGj+JVRZ{leYIWC}%VJaVix1 zZk}0uqpeDW9x*FUaHKL@X?6ps_jcsH&Ire_&1NC<)i|myaIDD+pefl@d_~E|Gp5VI3L{VyoELSSyC`cloy_ft$iVdoF1Saytgwqf~ukrUp}Q z_7UDLmkEWfy|7~1yZ)-JMBVHcVwBb|Mvij?L0|$(>Z^qPSFg-Vc)fEJkt258Bu%uw zaT%O)oNapG=}t=VF#yvjdU+7D@~-!)D>Y?Qqd{mXjNl4h%NoQ>uI2S{d@f~D8nYd; 