Commit 927fab6

eunwoosh, yunchu, sungchul2, vinnamkim, and Evgeny Tsykunov authored
Merge develop to develop-idev (#2727)
* Update base.txt updated dependency version of datumaro * Update __init__.py update version string * Update requirements.txt * Temporarily skip visual prompting openvino integration test (#2323) * Fix import dm.DatasetSubset (#2324) Signed-off-by: Kim, Vinnam <vinnam.kim@intel.com> * Fix semantic segmentation soft prediction dtype (#2322) * Fix semantic segmentation soft prediction dtype * relax ref sal vals check --------- Co-authored-by: Songki Choi <songki.choi@intel.com> * Contrain yapf verison lesser than 0.40.0 (#2328) contrain_yapf_version * Fix detection e2e tests (#2327) Fix for detection * Mergeback: Label addtion/deletion 1.2.4 --> 1.4.0 (#2326) * Make black happy * Fix conflicts * Merge-back: add test datasets and edit the test code * Make black happy * Fix mis-merge * Make balck happy * Fix typo * Fix typoi --------- Co-authored-by: Songki Choi <songki.choi@intel.com> * Bump datumaro up to 1.4.0rc2 (#2332) bump datumaro up to 1.4.0rc2 * Tiling Doc for releases 1.4.0 (#2333) * Add tiling documentation * Bump otx version to 1.4.0rc2 (#2341) * OTX deploy for visual prompting task (#2311) * Enable `otx deploy` * (WIP) integration test * Docstring * Update args for create_model * Manually set image embedding layout * Enable to use model api for preprocessing - `fit_to_window` doesn't work expectedly, so newly implemented `VisualPromptingOpenvinoAdapter` to use new resize function * Remove skipped test * Updated * Update unit tests on model wrappers * Update * Update configuration * Fix not to patch pretrained path * pylint & update model api version in docstring --------- Co-authored-by: Wonju Lee <wonju.lee@intel.com> * Bump albumentations version in anomaly requirements (#2350) increment albumentations version * Update action detection (#2346) * Remove skip mark for PTQ test of action detection * Update action detection documentation * Fix e2e (#2348) * Change classification dataset from dummy to toy * Revert test changes * Change label name for multilabel dataset * Revert e2e test changes * Change ov test cases' threshold * Add parent's label * Update ModelAPI in 1.4 release (#2347) * Upgrade model API * Update otx in exportable code * Fix unit tests * Fix black * Fix detection inference * Fix det tiling * Fix mypy * Fix demo * Fix visualizer in demo * Fix black * Add OTX optimize for visual prompting task (#2318) * Initial commit * Update block * (WIP) otx optimize * Fix * WIP * Update configs & exported outputs * Remove unused modules for torch * Add unit tests * pre-commit * Update CHANGELOG * Update detection docs (#2335) * Update detection docs * Revert template id changes * Fix wrong template id * Update docs/source/guide/explanation/algorithms/object_detection/object_detection.rst Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com> * Update docs/source/guide/explanation/algorithms/object_detection/object_detection.rst Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com> --------- Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com> * Add visual prompting documentation (#2354) * (WIP) write docs * Add visual prompting documentation * Update CHANGELOG --------- Co-authored-by: sungchul.kim <sungchul@ikvensx010> * Remove custom modelapi patch in visual prompting (#2359) * Remove custom modelapi patch * Update test * Fix graph metric order and label issues (#2356) * Fix graph metric going backward issue * Add license notice * Fix pre-commit issue * Add rename items & logic for metric --------- Signed-off-by: Songki Choi <songki.choi@intel.com> * Update multi-label document and 
conversion script (#2358) Update docs, label convert script * Update third party programs (#2365) * Make anomaly task compatible with older albumentations versions (#2363) * fix transforms export in metadata * wrap transform dict * add todo for updating to_dict call * Fixing detection saliency map for one class case (#2368) * fix softmax * fix validity tests * Add e2e test for visual prompting (#2360) * (WIP) otx optimize * pre-commit * (WIP) set e2e * Remove nncf config * Add visual prompting requirement * Add visual prompting in tox * Add visual prompting in setup.py * Fix typo * Delete unused configuration.yaml * Edit test_name * Add to limit activation range * Update from `vp` to `visprompt` * Fix about no returning the first label * pre-commit * (WIP) otx optimize * pre-commit * (WIP) set e2e * Remove nncf config * Add visual prompting requirement * Add visual prompting in tox * Add visual prompting in setup.py * Fix typo * pre-commit * Add actions * Update tests/e2e/cli/visual_prompting/test_visual_prompting.py Co-authored-by: Jaeguk Hyun <jaeguk.hyun@intel.com> * Skip PTQ e2e test * Change task name * Remove skipped tc --------- Co-authored-by: Jaeguk Hyun <jaeguk.hyun@intel.com> * Fix e2e (#2366) * Change e2e reference name * Update openvino eval threshold for multiclass classification * Change comment message * Fix tiling e2e tests --------- Co-authored-by: GalyaZalesskaya <galina.zalesskaya@intel.com> * Add Dino head unit tests (#2344) Recover DINO head unit tests * Update for release 1.4.0rc2 (#2370) * update for release 1.4.0rc2 * Add skip mark for unstable unit tests --------- Co-authored-by: jaegukhyun <jaeguk.hyun@intel.com> * Fix NNCF training on CPU (#2373) * Align label order between Geti and OTX (#2369) * align label order * align with pre-commit * update CHANGELOG.md * deal with edge case * update type hint * Remove CenterCrop from Classification test pipeline and editing missing docs link (#2375) * Fix missing link for docs and removing centercrop for classification data pipeline * Revert the test threshold * Fix H-label classification (#2377) * Fix h-labelissue * Update unit tests * Make black happy * Fix unittests * Make black happy * Fix update heades information func * Update the logic: consider the loss per batch * Update for release 1.4 (#2380) * updated for 1.4.0rc3 * update changelog & release note * bump datumaro version up --------- Co-authored-by: Songki Choi <songki.choi@intel.com> * Switch to PTQ for sseg (#2374) * Switch to PTQ for sseg * Update log messages * Fix invalid import structures in otx.api (#2383) Update tiler.py * Update for 1.4.0rc4 (#2385) update for release 1.4.0rc4 * [release 1.4.0] XAI: Return saliency maps for Mask RCNN IR async infer (#2395) * Return saliency maps for openvino async infer * add workaround to fix yapf importing error --------- Co-authored-by: eunwoosh <eunwoo.shin@intel.com> * Update for release 1.4.0 (#2399) update version string Co-authored-by: Sungman Cho <sungman.cho@intel.com> * Fix broken links in documentation (#2405) * fix docs links to datumaro's docs * fix docs links to otx's docs * bump version to 1.4.1 * Update exportable code README (#2411) * Updated for release 1.4.1 (#2412) updated for release 1.4.1 * Add workaround for the incorrect meta info M-RCNN (used for XAI) (#2437) Add workaround for the incorrect mata info * Add model category attributes to model template (#2439) Add model category attributes to model template * Add model category & status fields in model template * Add is_default_for_task attr to 
model template * Update model templates with category attrs * Add integration tests for model templates consistency * Fix license & doc string * Fix typo * Refactor test cases * Refactor common tests by generator --------- Signed-off-by: Songki Choi <songki.choi@intel.com> * Update for 1.4.2rc1 (#2441) update for release 1.4.2rc1 * Fix label list order for h-label classification (#2440) * Fix label list for h-label cls * Fix unit tests * Modified fq numbers for lite HRNET (#2445) modified fq numbers for lite HRNET * Update PTQ ignored scope for hrnet 18 mod2 (#2449) Update ptq ignored scope for hrnet 18 mod2 * Fix OpenVINO inference for legacy models (#2450) * bug fix for legacy openvino models * Add tests * Specific exceptions --------- * Update for 1.4.2rc2 (#2455) update for release 1.4.2rc2 * Prevent zero-sized saliency map in tiling if tile size is too big (#2452) * Prevent zero-sized saliency map in tiling if tile size is too big * Prevent zero-sized saliency in tiling (PyTorch) * Add unit tests for Tiler merge features methods --------- Co-authored-by: Galina <galina.zalesskaya@intel.com> * Update pot fq reference number (#2456) update pot fq reference number to 15 * Bump datumaro version to 1.5.0rc0 (#2470) bump datumaro version to 1.5.0rc0 * Set tox version constraint (#2472) set tox version constraint - tox-dev/tox#3110 * Bug fix for albumentations (#2467) * bug fix for legacy openvino models * Address albumentation issue --------- Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com> * update for release 1.4.2rc3 * Add a dummy hierarchical config required by MAPI (#2483) * bump version to 1.4.2rc4 * Bump datumaro version (#2502) * bump datumaro version * remove deprecated/reomved attribute usage of the datumaro * Upgrade nncf version for 1.4 release (#2459) * Upgrade nncf version * Fix nncf interface warning * Set the exact nncf version * Update FQ refs after NNCF upgrade * Use NNCF from pypi * Update version for release 1.4.2rc5 (#2507) update version for release 1.4.2rc5 * Update for 1.4.2 (#2514) update for release 1.4.2 * create branch release/1.5.0 * Delete mem cache handler after training is done (#2535) release mem cache handler after training is done * Fix bug that auto batch size doesn't consider distributed training (#2533) * consider distributed training while searching batch size * update unit test * reveret gpu memory upper bound * fix typo * change allocated to reserved * add unit test for distributed training * align with pre-commit * Apply fix progress hook to release 1.5.0 (#2539) * Fix hook's ordering issue. 
AdaptiveRepeatHook changes the runner.max_iters before the ProgressHook * Change the expression * Fix typo * Fix multi-label, h-label issue * Fix auto_bs issue * Apply suggestions from code review Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com> * Reflecting reviews * Refactor the name of get_data_cfg * Revert adaptive hook sampler init * Refactor the function name: get_data_cfg -> get_subset_data_cfg * Fix unit test errors * Remove adding AdaptiveRepeatDataHook for autobs * Remove unused import * Fix detection and segmentation case in Geti scenario --------- Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com> * Re introduce adaptive scheduling for training (#2541) * Re-introduce adaptive patience for training * Revert unit tests * Update for release 1.4.3rc1 (#2542) * Mirror Anomaly ModelAPI changes (#2531) * Migrate anomaly exportable code to modelAPI (#2432) * Fix license in PR template * Migrate to modelAPI * Remove color conversion in streamer * Remove reverse_input_channels * Add float * Remove test as metadata is no longer used * Remove metadata from load method * remove anomalib openvino inferencer * fix signature * Support logacy OpenVINO model * Transform image * add configs * Re-introduce adaptive training (#2543) * Re-introduce adaptive patience for training * Revert unit tests * Fix auto input size mismatch in eval & export (#2530) * Fix auto input size mismatch in eval & export * Re-enable E2E tests for Issue#2518 * Add input size check in export testing * Format float numbers in log * Fix NNCF export shape mismatch * Fix saliency map issue * Disable auto input size if tiling enabled --------- Signed-off-by: Songki Choi <songki.choi@intel.com> * Update ref. fq number for anomaly e2e2 (#2547) * Skip e2e det tests by issue2548 (#2550) * Add skip to chained TC for issue #2548 (#2552) * Update for release 1.4.3 (#2551) * Update MAPI for 1.5 release (#2555) Upgrade MAPI to v 0.1.6 (#2529) * Upgrade MAPI * Update exp code demo commit * Fix MAPI imports * Update ModelAPI configuration (#2564) * Update MAPI rt infor for detection * Upadte export info for cls, det and seg * Update unit tests * Disable QAT for SegNexts (#2565) * Disable NNCF QAT for SegNext * Del obsolete pot configs * Move NNCF skip marks to test commands to avoid duplication * Add Anomaly modelAPI changes to releases/1.4.0 (#2563) * bug fix for legacy openvino models * Apply otx anomaly 1.5 changes * Fix tests * Fix compression config * fix modelAPI imports * update integration tests * Edit config types * Update keys in deployed model --------- Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com> Co-authored-by: Kim, Sungchul <sungchul.kim@intel.com> * Fix the CustomNonLinearClsHead when the batch_size is set to 1 (#2571) Fix bn1d issue Co-authored-by: sungmanc <sungmanc@intel.com> * Update ModelAPI configuration (#2564 from 1.4) (#2568) Update ModelAPI configuration (#2564) * Update MAPI rt infor for detection * Upadte export info for cls, det and seg * Update unit tests * Update for 1.4.4rc1 (#2572) * Hotfix DatasetEntity.get_combined_subset function loop (#2577) Fix get_combined_subset function * Revert default input size to `Default` due to YOLOX perf regression (#2580) Signed-off-by: Songki Choi <songki.choi@intel.com> * Fix for the degradation issue of the classification task (#2585) * Revert to sync with 1.4.0 * Remove repeat data * Convert to the RGB value * Fix color conversion logic * Fix precommit * Bump datumaro version to 1.5.1rc3 (#2587) * Add label ids to anomaly OpenVINO model xml (#2590) * 
Add label ids to model xml --------- * Fix DeiT-Tiny model regression during class incremental training (#2594) * enable IBloss for DeiT-Tiny * update changelog * add docstring * Add label ids to model xml in release 1.5 (#2591) Add label ids to model xml * Fix DeiT-Tiny regression test for release/1.4.0 (#2595) * Fix DeiT regression test * update changelog * temp * Fix mmcls bug not wrapping model in DataParallel on CPUs (#2601) Wrap multi-label and h-label classification models by MMDataParallel in case of CPU training. --------- Signed-off-by: Songki Choi <songki.choi@intel.com> * Fix h-label loss normalization issue w/ exclusive label group of singe label (#2604) * Fix h-label loss normalization issue w/ exclusive label group with signle label * Fix non-linear version --------- Signed-off-by: Songki Choi <songki.choi@intel.com> * Boost up Image numpy accessing speed through PIL (#2586) * boost up numpy accessing speed through PIL * update CHANGELOG * resolve precommit error * resolve precommit error * add fallback logic with PIL open * use convert instead of draft * Add missing import pathlib for cls e2e testing (#2610) * Fix division by zero in class incremental learning for classification (#2606) * Add empty label to reproduce zero-division error Signed-off-by: Songki Choi <songki.choi@intel.com> * Fix minor typo Signed-off-by: Songki Choi <songki.choi@intel.com> * Fix empty label 4 -> 3 Signed-off-by: Songki Choi <songki.choi@intel.com> * Prevent division by zero Signed-off-by: Songki Choi <songki.choi@intel.com> * Update license Signed-off-by: Songki Choi <songki.choi@intel.com> * Update CHANGELOG.md Signed-off-by: Songki Choi <songki.choi@intel.com> * Fix inefficient sampling Signed-off-by: Songki Choi <songki.choi@intel.com> * Revert indexing Signed-off-by: Songki Choi <songki.choi@intel.com> * Fix minor typo Signed-off-by: Songki Choi <songki.choi@intel.com> --------- Signed-off-by: Songki Choi <songki.choi@intel.com> * Unify logger usage (#2612) * unify logger * align with pre-commit * unify anomaly logger to otx * change logger file path * align with pre-commit * change logger file path in missing file * configure logger after ConfigManager is initialized * configure logger when ConfigManager instance is initialized * update unit test code * move config_logger to each cli file * align with pre-commit * change part still using mmcv logger * Fix XAI algorithm for Detection (#2609) * Impove saliency maps algorithm for Detection * Remove extra changes * Update unit tests * Changes for 1 class * Fix pre-commit * Update CHANGELOG * Tighten dependency constraint only adapting latest patches (#2607) * tighten dependency constratint only adapting latest patches * adjust scikit-image version w.r.t python version * adjust tensorboard version w.r.t python version * remove version specifier for scikit-image * Add metadata to optimized model (#2618) * bug fix for legacy openvino models * Add metadata to optimized model * Revert formatting changes --------- Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com> * modify omegaconf version constraint * [release 1.5.0] Fix XAI algorithm for Detection (#2617) Update detection XAI algorithm * Update dependency constraint (#2622) * Update tpp (#2621) * Fix h-label bug of missing parent labels in output (#2626) * Fix h-label bug of missing parent labels in output * Fix h-label test data label schema * Update CHANGELOG.md --------- Signed-off-by: Songki Choi <songki.choi@intel.com> * Update publish workflow (#2625) update publish workflow to 
push whl to internal pypi * bump datumaro version to ~=1.5.0 * fixed mistake while mergeing back 1.4.4 * modifiy readme * remove openvino model wrapper class * remove openvino model wrapper tests * [release 1.5.0] DeiT: enable tests + add ViTFeatureVectorHook (#2630) Add ViT feature vector hook * Fix docs broken link to datatumaro_h-label Signed-off-by: Songki Choi <songki.choi@intel.com> * Fix wrong label settings for non-anomaly task ModelAPIs Signed-off-by: Songki Choi <songki.choi@intel.com> * Update publish workflow for tag checking (#2632) * Update e2e tests for XAI Detection (#2634) Fix e2e XAI ref value * Disable QAT for newly added models (#2636) * Update release note and readme (#2637) * update release note and readme * remove package upload step on internal publish wf * update release note and, changelog, and readme * update version string to 1.6.0dev * fix datumaro version to 1.6.0rc0 * Mergeback 1.5.0 to develop (#2642) * Update publish workflow for tag checking (#2632) * Update e2e tests for XAI Detection (#2634) * Disable QAT for newly added models (#2636) * Update release note and readme (#2637) * remove package upload step on internal publish wf * update release note and, changelog, and readme * update version string to 1.6.0dev --------- Co-authored-by: Galina Zalesskaya <galina.zalesskaya@intel.com> Co-authored-by: Jaeguk Hyun <jaeguk.hyun@intel.com> * Revert "Mergeback 1.5.0 to develop" (#2645) Revert "Mergeback 1.5.0 to develop (#2642)" This reverts commit 2f67686. * Add a tool to help conduct experiments (#2651) * implement run and experiment * implement experiment result aggregator * refactor experiment.py * refactor run.py * get export model speed * add var collumn * refactor experiment.py * refine a way to update argument in cmd * refine resource tracker * support anomaly on research framework * refine code aggregating exp result * bugfix * make other task available * eval task save avg_time_per_images as result * Add new argument to track CPU&GPU utilization and memory usage (#2500) * add argument to track resource usage * fix bug * fix a bug in a multi gpu case * use total cpu usage * add unit test * add mark to unit test * cover edge case * add pynvml in requirement * align with pre-commit * add license comment * update changelog * refine argument help * align with pre-commit * add version to requirement and raise an error if not supported values are given * apply new resource tracker format * refactor run.py * support optimize in research framework * cover edge case * Handle a case where fail cases exist * make argparse raise error rather than exit if problem exist * revert tensorboard aggregator * bugfix * save failed cases as yaml file * deal with integer in variables * add epoch to metric * use latest log.json file * align with otx logging method * move experiment.py from cli to tools * refactor experiment.py * merge otx run feature into experiment.py * move set_arguments_to_cmd definition into experiment.py * refactor experiment.py * bugfix * minor bugfix * use otx.cli instead of each otx entry * add feature to parse single workspace * add comments * fix bugs * align with pre-commit * revert parser argument * align with pre-commit * Make `max_num_detections` configurable (#2647) * Make max_num_detections configurable * Fix RCNN case with integration test * Apply max_num_detections to train_cfg, too --------- Signed-off-by: Songki Choi <songki.choi@intel.com> * Revert inference batch size to 1 for instance segmentation (#2648) Signed-off-by: Songki Choi 
<songki.choi@intel.com> * Fix CPU training issue on non-CUDA system (#2655) Fix bug that auto adaptive batch size raises an error if CUDA isn't available (#2410) --------- Co-authored-by: Sungman Cho <sungman.cho@intel.com> Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com> * Remove unnecessary log while building a model (#2658) * revert logger in otx/algorithms/detection/adapters/mmdet/utils/builder.py * revert logger in otx/algorithms/classification/adapters/mmcls/utils/builder.py * make change more readable * Fix a minor bug of experiment.py (#2662) fix bug * Not check avg_time_per_image during test (#2665) * ignore avg_time_per_image during test * do not call stdev when length of array is less than 2 * ignore avg_time_per_image during regerssion test * Update docs for enabling sphinx.ext.autosummary (#2654) * fix some errors/warnings on docs source * enable sphinx-autosummary for API reference documentation * Update Makefile * update sphinx configuration * Update PTQ docs (#2672) * Replace POT -> PTQ * Fixes from comments * Update regression tests for develop (#2652) * Update regression tests (#2556) * update reg tests * update test suit * update regression criteria --------- Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com> * Exclude py37 target config for cibuildwheel (#2673) * Add `--dryrun` option to tools/experiment.py (#2674) * Fix variable override bug * Add --dryrun option to see experiment list --------- Signed-off-by: Songki Choi <songki.choi@intel.com> * Update OTX explain CLI arguments (#2671) * Change int8 to uint8 to XAI tests * Add probabilities for CLI demo * Rename arguments for explain * Fix pre-commit * Remove extra changes * Fix integration tests * Fix integration "explain_all_classes" test for OV * Fix e2e tests for explain (#2681) * Add README.md for experiment.py (#2688) * write draft readme * refine readme * align with pre-commit * Fix typo in reg test cmd (#2691) * Select more proper model weight file according to commands run just before (#2696) * consider more complex case when prepare eval and optimize * update readme * align with pre-commit * add comment * Add visual prompting zero-shot learning (`learn` & `infer`) (#2616) * Add algobackend & temp configs * Update config * WIP * Fix to enable `algo_backend` * (WIP) Update dataset * (WIP) Update configs * (WIP) Update tasks * (WIP) Update models * Enable `learn` task through otx.train * (WIP) enable `infer` (TODO : normalize points) * Fix when `state_dict` is None * Enable `ZeroShotInferenceCallback` * Enable otx infer * Enable to independently use processor * Revert max_steps * Change `postprocess_masks` to `staticmethod` * Add `PromptGetter` & Enable `learn` and `infer` * precommit * Fix args * Fix typo * Change `id` to `id_` * Fix import * Fix args * precommit * (WIP) Add unit tests * Fix * Add unit tests * Fix * Add integration tests * precommit * Update CHANGELOG.md * Update docstring and type annotations * Fix * precommit * Fix unused args * precommit * Fix * Fix unsupported dtype in ov graph constant converter (#2676) * Fix unsupported dtype in ov graph constant converter * Fix more ov-graph related unit tests * Skip failure TC with adding issue number ref. 
(#2717) * Fix visual prompting e2e test (#2719) Skip zero-shot e2e * Remove duplicated variable combination in experiment.py (#2713) * Enhance detection & instance segmentation experiment (#2710) * Compute precision and recall along with f-measure * Log performance * Accept ellipse annotation from datumaro format * Fix dataset adapter condition for det/iset * Insert garbage collection btw experiments --------- Signed-off-by: Kim, Vinnam <vinnam.kim@intel.com> Signed-off-by: Songki Choi <songki.choi@intel.com> Co-authored-by: Yunchu Lee <yunchu.lee@intel.com> Co-authored-by: Kim, Sungchul <sungchul.kim@intel.com> Co-authored-by: Vinnam Kim <vinnam.kim@intel.com> Co-authored-by: Evgeny Tsykunov <evgeny.tsykunov@intel.com> Co-authored-by: Songki Choi <songki.choi@intel.com> Co-authored-by: Jaeguk Hyun <jaeguk.hyun@intel.com> Co-authored-by: Sungman Cho <sungman.cho@intel.com> Co-authored-by: Eugene Liu <eugene.liu@intel.com> Co-authored-by: Wonju Lee <wonju.lee@intel.com> Co-authored-by: Dick Ameln <dick.ameln@intel.com> Co-authored-by: Vladislav Sovrasov <sovrasov.vlad@gmail.com> Co-authored-by: sungchul.kim <sungchul@ikvensx010> Co-authored-by: GalyaZalesskaya <galina.zalesskaya@intel.com> Co-authored-by: Harim Kang <harim.kang@intel.com> Co-authored-by: Ashwin Vaidya <ashwin.vaidya@intel.com> Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com> Co-authored-by: sungmanc <sungmanc@intel.com>
1 parent 71e9adc commit 927fab6

File tree

185 files changed (+4427 / -3762 lines changed)


.github/workflows/run_tests_in_tox.yml

+1-2
@@ -56,7 +56,6 @@ jobs:
       name: ${{ inputs.artifact-prefix }}-${{ inputs.toxenv-task }}-${{ inputs.toxenv-pyver }}-${{ inputs.toxenv-ptver }}
       path: |
         .tox/tests-${{ inputs.toxenv-task }}-${{ inputs.toxenv-pyver }}-${{ inputs.toxenv-ptver }}.csv
-        .tox/tests-reg_${{ inputs.task }}_*.csv
-        .tox/tests-reg_tiling_${{ inputs.task }}_*.csv
+        .tox/tests-reg_${{ inputs.task }}*.csv
       # Use always() to always run this step to publish test results when there are test failures
       if: ${{ inputs.upload-artifact && always() }}

.github/workflows/weekly.yml

+1-8
@@ -14,31 +14,24 @@ jobs:
       include:
         - toxenv_task: "iseg"
           test_dir: "tests/regression/instance_segmentation/test_instance_segmentation.py"
-          runs_on: "['self-hosted', 'Linux', 'X64', 'dmount']"
           task: "instance_segmentation"
         - toxenv_task: "iseg_t"
           test_dir: "tests/regression/instance_segmentation/test_tiling_instance_segmentation.py"
-          runs_on: "['self-hosted', 'Linux', 'X64', 'dmount']"
           task: "instance_segmentation"
         - toxenv_task: "seg"
           test_dir: "tests/regression/semantic_segmentation"
-          runs_on: "['self-hosted', 'Linux', 'X64', 'dmount']"
           task: "segmentation"
         - toxenv_task: "det"
           test_dir: "tests/regression/detection"
-          runs_on: "['self-hosted', 'Linux', 'X64', 'dmount']"
           task: "detection"
         - toxenv_task: "ano"
           test_dir: "tests/regression/anomaly"
-          runs_on: "['self-hosted', 'Linux', 'X64', 'dmount']"
           task: "anomaly"
         - toxenv_task: "act"
           test_dir: "tests/regression/action"
-          runs_on: "['self-hosted', 'Linux', 'X64', 'dmount']"
           task: "action"
         - toxenv_task: "cls"
           test_dir: "tests/regression/classification"
-          runs_on: "['self-hosted', 'Linux', 'X64', 'dmount']"
           task: "classification"
     name: Regression-Test-py310-${{ matrix.toxenv_task }}
     uses: ./.github/workflows/run_tests_in_tox.yml
@@ -47,7 +40,7 @@
       toxenv-pyver: "py310"
       toxenv-task: ${{ matrix.toxenv_task }}
       tests-dir: ${{ matrix.test_dir }}
-      runs-on: ${{ matrix.runs_on }}
+      runs-on: "['self-hosted', 'Linux', 'X64', 'dmount']"
       task: ${{ matrix.task }}
       timeout-minutes: 8640
       upload-artifact: true

.gitignore

+4
@@ -18,6 +18,7 @@ results/
 build/
 dist/
 !src/otx/recipes/**
+src/otx/recipes/**/__pycache__
 *egg-info
 
 *.pth
@@ -45,3 +46,6 @@ src/**/*.so
 
 # Dataset made by unit-test
 tests/**/detcon_mask/*
+
+# sphinx-autosummary generated files
+docs/**/_autosummary/

CHANGELOG.md

+5
@@ -4,6 +4,10 @@ All notable changes to this project will be documented in this file.
 
 ## \[unreleased\]
 
+### New features
+
+- Add zero-shot visual prompting (https://github.com/openvinotoolkit/training_extensions/pull/2616)
+
 ## \[v1.5.0\]
 
 ### New features
@@ -46,6 +50,7 @@ All notable changes to this project will be documented in this file.
 - Update ModelAPI configuration(<https://github.com/openvinotoolkit/training_extensions/pull/2564>)
 - Add Anomaly modelAPI changes (<https://github.com/openvinotoolkit/training_extensions/pull/2563>)
 - Update Image numpy access (<https://github.com/openvinotoolkit/training_extensions/pull/2586>)
+- Make max_num_detections configurable (<https://github.com/openvinotoolkit/training_extensions/pull/2647>)
 
 ### Bug fixes
 

docs/Makefile

+6
@@ -23,3 +23,9 @@ html:
 # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
 %: Makefile
     @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
+# Custom clean target that also removes autosummary generated files. Can
+# be removed when https://github.com/sphinx-doc/sphinx/issues/1999 is fixed.
+clean:
+    rm -rf "$(SOURCEDIR)/guide/reference/_autosummary"
+    $(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

docs/source/conf.py

+37-1
@@ -33,8 +33,23 @@
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
 extensions = [
+    "sphinx.ext.napoleon",  # Support for NumPy and Google style docstrings
     'sphinx.ext.autodoc',
     'sphinx_copybutton',
+    "sphinx.ext.autosummary",  # Create neat summary tables
+    "sphinx.ext.viewcode",  # Find the source files
+    "sphinx.ext.autosectionlabel",  # Refer sections its title
+    "sphinx.ext.intersphinx",  # Generate links to the documentation
+]
+
+source_suffix = {
+    ".rst": "restructuredtext",
+    ".md": "markdown",
+}
+
+suppress_warnings = [
+    "ref.python",
+    "autosectionlabel.*",
 ]
 
 # Add any paths that contain templates here, relative to this directory.
@@ -45,7 +60,6 @@
 # This pattern also affects html_static_path and html_extra_path.
 exclude_patterns = []
 
-
 # -- Options for HTML output ------------------------------------------------- #
 # The theme to use for HTML and HTML Help pages. See the documentation for
 # a list of builtin themes.
@@ -74,3 +88,25 @@
 html_css_files = [
     'css/custom.css',
 ]
+
+# -- Extension configuration -------------------------------------------------
+autodoc_docstring_signature = True
+autodoc_member_order = "bysource"
+intersphinx_mapping = {
+    "python": ("https://docs.python.org/3", None),
+    "numpy": ("https://numpy.org/doc/stable/", None),
+}
+autodoc_member_order = "groupwise"
+autodoc_default_options = {
+    "members": True,
+    "methods": True,
+    "special-members": "__call__",
+    "exclude-members": "_abc_impl",
+    "show-inheritance": True,
+}
+
+autoclass_content = "both"
+
+autosummary_generate = True  # Turn on sphinx.ext.autosummary
+autosummary_ignore_module_all = False  # Summary list in __all__ no others
+# autosummary_imported_members = True # document classes and functions imported in modules
@@ -1,17 +1,17 @@
 Models Optimization
 ===================
 
-OpenVINO™ Training Extensions provides two types of optimization algorithms: `Post-training Optimization Tool (POT) <https://docs.openvino.ai/latest/pot_introduction.html#doxid-pot-introduction>`_ and `Neural Network Compression Framework (NNCF) <https://github.com/openvinotoolkit/nncf>`_.
+OpenVINO™ Training Extensions provides two types of optimization algorithms: `Post-Training Quantization tool (PTQ) <https://github.com/openvinotoolkit/nncf#post-training-quantization>`_ and `Neural Network Compression Framework (NNCF) <https://github.com/openvinotoolkit/nncf>`_.
 
 *******************************
-Post-training Optimization Tool
+Post-Training Quantization Tool
 *******************************
 
-POT is designed to optimize the inference of models by applying post-training methods that do not require model retraining or fine-tuning. If you want to know more details about how POT works and to be more familiar with model optimization methods, please refer to `documentation <https://docs.openvino.ai/latest/pot_introduction.html#doxid-pot-introduction>`_.
+PTQ is designed to optimize the inference of models by applying post-training methods that do not require model retraining or fine-tuning. If you want to know more details about how PTQ works and to be more familiar with model optimization methods, please refer to `documentation <https://docs.openvino.ai/2023.2/ptq_introduction.html>`_.
 
-To run Post-training optimization it is required to convert the model to OpenVINO™ intermediate representation (IR) first. To perform fast and accurate quantization we use ``DefaultQuantization Algorithm`` for each task. Please, see the `DefaultQuantization Parameters <https://docs.openvino.ai/latest/pot_compression_algorithms_quantization_default_README.html#doxid-pot-compression-algorithms-quantization-default-r-e-a-d-m-e>`_ for further information about configuring the optimization.
+To run Post-training quantization it is required to convert the model to OpenVINO™ intermediate representation (IR) first. To perform fast and accurate quantization we use ``DefaultQuantization Algorithm`` for each task. Please, refer to the `Tune quantization Parameters <https://docs.openvino.ai/2023.2/basic_quantization_flow.html#tune-quantization-parameters>`_ for further information about configuring the optimization.
 
-POT parameters can be found and configured in ``template.yaml`` and ``configuration.yaml`` for each task. For Anomaly and Semantic Segmentation tasks, we have separate configuration files for POT, that can be found in the same directory with ``template.yaml``, for example for `PaDiM <https://github.com/openvinotoolkit/training_extensions/blob/develop/src/otx/algorithms/anomaly/configs/classification/padim/ptq_optimization_config.py>`_, `OCR-Lite-HRNe-18-mod2 <https://github.com/openvinotoolkit/training_extensions/blob/develop/src/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18_mod2/ptq_optimization_config.py>`_ model.
+PTQ parameters can be found and configured in ``template.yaml`` and ``configuration.yaml`` for each task. For Anomaly and Semantic Segmentation tasks, we have separate configuration files for PTQ, that can be found in the same directory with ``template.yaml``, for example for `PaDiM <https://github.com/openvinotoolkit/training_extensions/blob/develop/src/otx/algorithms/anomaly/configs/classification/padim/ptq_optimization_config.py>`_, `OCR-Lite-HRNe-18-mod2 <https://github.com/openvinotoolkit/training_extensions/blob/develop/src/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18_mod2/ptq_optimization_config.py>`_ model.
 
 ************************************
 Neural Network Compression Framework
@@ -25,9 +25,9 @@ You can refer to configuration files for default templates for each task accordi
 
 NNCF tends to provide better quality in terms of preserving accuracy as it uses training compression approaches.
 Compression results achievable with the NNCF can be found `here <https://github.com/openvinotoolkit/nncf#nncf-compressed-model-zoo>`_ .
-Meanwhile, the POT is faster but can degrade accuracy more than the training-enabled approach.
+Meanwhile, the PTQ is faster but can degrade accuracy more than the training-enabled approach.
 
 .. note::
     The main recommendation is to start with post-training compression and use NNCF compression during training if you are not satisfied with the results.
 
-Please, refer to our :doc:`dedicated tutorials <../../tutorials/base/how_to_train/index>` on how to optimize your model using POT or NNCF.
+Please, refer to our :doc:`dedicated tutorials <../../tutorials/base/how_to_train/index>` on how to optimize your model using PTQ or NNCF.
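The PTQ flow described above runs on the exported OpenVINO™ IR. As a rough, minimal sketch of what the post-training quantization step boils down to (shown with NNCF's public Python API rather than the exact code path that ``otx optimize`` runs internally; the IR path, input shape, and calibration data below are placeholders):

import numpy as np
import nncf
import openvino.runtime as ov

core = ov.Core()
model = core.read_model("openvino.xml")  # placeholder path to an exported IR

# Dummy calibration data; in practice this would be a representative validation subset.
calibration_items = [(np.random.rand(1, 3, 224, 224).astype(np.float32), 0) for _ in range(10)]

def transform_fn(data_item):
    # Map one dataset item to the model input; adapt this to the task's real data loader.
    image, _ = data_item
    return image

calibration_dataset = nncf.Dataset(calibration_items, transform_fn)
quantized_model = nncf.quantize(model, calibration_dataset)  # default 8-bit post-training quantization
ov.serialize(quantized_model, "openvino_quantized.xml")

In OpenVINO™ Training Extensions this step is driven by ``otx optimize`` with the parameters taken from the task's configuration files mentioned above, so the snippet is only meant to make the quantization step concrete.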

docs/source/guide/explanation/algorithms/action/action_classification.rst

+1-1
@@ -1,5 +1,5 @@
 Action Classification
-==================
+=====================
 
 Action classification is a problem of identifying the action that is being performed in a video. The input to the algorithm is a sequence of video frames, and the output is a label indicating the action that is being performed.
 
docs/source/guide/explanation/algorithms/anomaly/index.rst

+1-1
@@ -143,7 +143,7 @@ Since STFPM trains the student network, we use the following parameters for its
 - ``Aditional Techniques``:
   - ``Early Stopping``: Early stopping is used to stop the training process when the validation loss stops improving. The default value of the early stopping patience is ``10``.
 
-For more information on STFPM's training. We invite you to read Anomalib's `STFPM documentation<https://openvinotoolkit.github.io/anomalib/reference_guide/algorithms/stfpm.html>`_.
+For more information on STFPM's training. We invite you to read Anomalib's `STFPM documentation <https://anomalib.readthedocs.io/en/latest/reference_guide/algorithms/stfpm.htm>`_.
 
 Reconstruction-based Models
 ---------------------------

docs/source/guide/explanation/algorithms/classification/multi_class_classification.rst

+1-2
@@ -100,7 +100,7 @@ In the table below the top-1 accuracy on some academic datasets using our :ref:`
 +-----------------------+-----------------+-----------+-----------+-----------+
 | EfficientNet-V2-S | 96.13 | 90.36 | 97.68 | 86.74 |
 +-----------------------+-----------------+-----------+-----------+-----------+
-*These datasets were splitted with auto-split (80% train, 20% test).
+\* These datasets were splitted with auto-split (80% train, 20% test).
 
 ************************
 Semi-supervised Learning
@@ -145,7 +145,6 @@ In the table below the top-1 accuracy on some academic datasets using our pipeli
 | EfficientNet-V2-S | 36.03 | 39.66 | 16.81 | 20.28 | 65.99 | 69.61 |
 +-----------------------+---------+---------+-------+---------+--------+---------+
 
-|
 
 - 10 labeled images per class including unlabeled dataset for Semi-SL
 
docs/source/guide/explanation/algorithms/classification/multi_label_classification.rst

+1
@@ -28,6 +28,7 @@ Specifically, this format should be converted in our `internal representation <h
 To convert the COCO data format to our internal one, run this script in similar way:
 
 .. code-block::
+
     python convert_coco_to_multilabel.py --ann_file_path <path to .json COCO annotations> --data_root_dir <path to images folder> --output <output path to save annotations>
 
 .. note::
docs/source/guide/explanation/algorithms/index.rst

+2
@@ -11,7 +11,9 @@ To this end, we support:
 - **Supervised training**. This is the most common approach for computer vision tasks such as object detection and image classification. Supervised learning involves training a model on a labeled dataset of images. The model learns to associate specific features in the images with the corresponding labels.
 
 - **Incremental learning**. This learning approach lets the model train on new data as it becomes available, rather than retraining the entire model on the whole dataset every time new data is added. OpenVINO™ Training Extensions supports also the class incremental approach for all tasks. In this approach, the model is first trained on a set of classes, and then incrementally updated with new classes of data, while keeping the previously learned classes' knowledge. The class incremental approach is particularly useful in situations where the number of classes is not fixed and new classes may be added over time.
+
 .. _semi_sl_explanation:
+
 - **Semi-supervised learning**. This is a type of machine learning in which the model is trained on a dataset that contains a combination of labeled and unlabeled examples. The labeled examples are used to train the model, while the unlabeled examples are used to improve the model's performance by providing additional information about the underlying distribution of the data. This approach is often used when there is a limited amount of labeled data available, but a large amount of unlabeled data. This can make it more cost-effective and efficient to train models compared to traditional supervised learning, where the model is trained only on labeled data.
 
 - **Self-supervised learning**. This is a type of machine learning where the model is trained on a dataset that contains only unlabeled examples. The model is trained to learn useful representations of the data by solving a task that can be inferred from the input itself, without human-provided labels. The objective is to learn good representations of the input data that can then be used for downstream tasks such as classification, detection, generation or clustering.

docs/source/guide/explanation/algorithms/object_detection/object_detection.rst

+2
@@ -92,6 +92,7 @@ We support the following ready-to-use model templates:
 Above table can be found using the following command
 
 .. code-block::
+
     $ otx find --task detection
 
 `MobileNetV2-ATSS <https://arxiv.org/abs/1912.02424>`_ is a good medium-range model that works well and fast in most cases.
@@ -147,6 +148,7 @@ Please, refer to the :doc:`tutorial <../../../tutorials/advanced/backbones>` how
 To see which public backbones are available for the task, the following command can be executed:
 
 .. code-block::
+
     $ otx find --backbone {torchvision, pytorchcv, mmcls, omz.mmcls}
 
 In the table below the test mAP on some academic datasets using our :ref:`supervised pipeline <od_supervised_pipeline>` is presented.

docs/source/guide/explanation/algorithms/segmentation/instance_segmentation.rst

+3-3
@@ -61,11 +61,11 @@ We support the following ready-to-use model templates:
 +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------+---------------------+-----------------+
 | Template ID | Name | Complexity (GFLOPs) | Model size (MB) |
 +====================================================================================================================================================================================================+============================+=====================+=================+
-| `Custom_Counting_Instance_Segmentation_MaskRCNN_EfficientNetB2B <https://github.com/openvinotoolkit/training_extensions/blob/develop/src/otx/algorithms/detection/configs/instance_segmentation/efficientnetb2b_maskrcnn/template.yaml>`_ | MaskRCNN-EfficientNetB2B | 68.48 | 13.27 |
+| `Custom_Counting_Instance_Segmentation_MaskRCNN_EfficientNetB2B <https://github.com/openvinotoolkit/training_extensions/blob/develop/src/otx/algorithms/detection/configs/instance_segmentation/efficientnetb2b_maskrcnn/template.yaml>`_ | MaskRCNN-EfficientNetB2B | 68.48 | 13.27 |
 +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------+---------------------+-----------------+
-| `Custom_Counting_Instance_Segmentation_MaskRCNN_ResNet50 <https://github.com/openvinotoolkit/training_extensions/blob/develop/src/otx/algorithms/detection/configs/instance_segmentation/resnet50_maskrcnn/template.yaml>`_ | MaskRCNN-ResNet50 | 533.80 | 177.90 |
+| `Custom_Counting_Instance_Segmentation_MaskRCNN_ResNet50 <https://github.com/openvinotoolkit/training_extensions/blob/develop/src/otx/algorithms/detection/configs/instance_segmentation/resnet50_maskrcnn/template.yaml>`_ | MaskRCNN-ResNet50 | 533.80 | 177.90 |
 +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------+---------------------+-----------------+
-| `Custom_Counting_Instance_Segmentation_MaskRCNN_ConvNeXt <https://github.com/openvinotoolkit/training_extensions/blob/develop/src/otx/algorithms/detection/configs/instance_segmentation/convnext_maskrcnn/template.yaml>`_ | MaskRCNN-ConvNeXt | 266.78 | 192.4 |
+| `Custom_Counting_Instance_Segmentation_MaskRCNN_ConvNeXt <https://github.com/openvinotoolkit/training_extensions/blob/develop/src/otx/algorithms/detection/configs/instance_segmentation/convnext_maskrcnn/template.yaml>`_ | MaskRCNN-ConvNeXt | 266.78 | 192.4 |
 +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------+---------------------+-----------------+
 
 MaskRCNN-ResNet50 utilizes the `ResNet-50 <https://arxiv.org/abs/1512.03385>`_ architecture as the backbone network for extracting image features. This choice of backbone network results in a higher number of parameters and FLOPs, which consequently requires more training time. However, the model offers superior performance in terms of accuracy.
