
Commit 6914551

Authored by kprokofi, yunchu, sungchul2, vinnamkim, and Evgeny Tsykunov
Merge develop 2 (#2936)
* Update base.txt: update dependency version of datumaro
* Update __init__.py: update version string
* Update requirements.txt
* Temporarily skip visual prompting openvino integration test (#2323)
* Fix import dm.DatasetSubset (#2324)
  Signed-off-by: Kim, Vinnam <vinnam.kim@intel.com>
* Fix semantic segmentation soft prediction dtype (#2322)
  (relax ref sal vals check)
  Co-authored-by: Songki Choi <songki.choi@intel.com>
* Constrain yapf version to less than 0.40.0 (#2328)
* Fix detection e2e tests (#2327)
* Mergeback: label addition/deletion 1.2.4 --> 1.4.0 (#2326)
  (merge-back: add test datasets and edit the test code; fix conflicts and mis-merge; fix typos; make black happy)
  Co-authored-by: Songki Choi <songki.choi@intel.com>
* Bump datumaro up to 1.4.0rc2 (#2332)
* Tiling doc for release 1.4.0 (#2333)
* Bump otx version to 1.4.0rc2 (#2341)
* OTX deploy for visual prompting task (#2311)
  (enable `otx deploy`; update args for create_model; manually set image embedding layout; enable model API for preprocessing: `fit_to_window` doesn't work as expected, so `VisualPromptingOpenvinoAdapter` was newly implemented to use the new resize function; update unit tests on model wrappers; update configuration; fix not to patch pretrained path)
  Co-authored-by: Wonju Lee <wonju.lee@intel.com>
* Bump albumentations version in anomaly requirements (#2350)
* Update action detection (#2346)
  (remove skip mark for PTQ test of action detection; update action detection documentation)
* Fix e2e (#2348)
  (change classification dataset from dummy to toy; change label name for multilabel dataset; change ov test cases' threshold; add parent's label)
* Update ModelAPI in 1.4 release (#2347)
  (upgrade model API; update otx in exportable code; fix detection inference, det tiling, mypy, demo, and visualizer in demo)
* Add OTX optimize for visual prompting task (#2318)
  (update configs & exported outputs; remove unused modules for torch; add unit tests; update CHANGELOG)
* Update detection docs (#2335)
  (revert template id changes; fix wrong template id; update docs/source/guide/explanation/algorithms/object_detection/object_detection.rst)
  Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com>
* Add visual prompting documentation (#2354)
  Co-authored-by: sungchul.kim <sungchul@ikvensx010>
* Remove custom modelapi patch in visual prompting (#2359)
* Fix graph metric order and label issues (#2356)
  (fix graph metric going backward issue; add license notice; add rename items & logic for metric)
  Signed-off-by: Songki Choi <songki.choi@intel.com>
* Update multi-label document and conversion script (#2358)
* Update third party programs (#2365)
* Make anomaly task compatible with older albumentations versions (#2363)
  (fix transforms export in metadata; wrap transform dict; add todo for updating to_dict call)
* Fix detection saliency map for one-class case (#2368)
  (fix softmax; fix validity tests)
* Add e2e test for visual prompting (#2360)
  (add visual prompting requirement and entries in tox and setup.py; remove nncf config; delete unused configuration.yaml; add limit to activation range; rename `vp` to `visprompt`; fix not returning the first label; skip PTQ e2e test; change task name)
  Co-authored-by: Jaeguk Hyun <jaeguk.hyun@intel.com>
* Fix e2e (#2366)
  (change e2e reference name; update openvino eval threshold for multiclass classification; fix tiling e2e tests)
  Co-authored-by: GalyaZalesskaya <galina.zalesskaya@intel.com>
* Add Dino head unit tests (#2344)
* Update for release 1.4.0rc2 (#2370)
  (add skip mark for unstable unit tests)
  Co-authored-by: jaegukhyun <jaeguk.hyun@intel.com>
* Fix NNCF training on CPU (#2373)
* Align label order between Geti and OTX (#2369)
  (deal with edge case; update type hint; update CHANGELOG.md)
* Remove CenterCrop from classification test pipeline and edit missing docs link (#2375)
* Fix H-label classification (#2377)
  (fix header-update logic; consider the loss per batch; update unit tests)
* Update for release 1.4 (#2380)
  (update for 1.4.0rc3; update changelog & release note; bump datumaro version up)
  Co-authored-by: Songki Choi <songki.choi@intel.com>
* Switch to PTQ for sseg (#2374)
* Fix invalid import structures in otx.api (#2383): update tiler.py
* Update for 1.4.0rc4 (#2385)
* [release 1.4.0] XAI: return saliency maps for Mask RCNN IR async infer (#2395)
  (add workaround to fix yapf importing error)
  Co-authored-by: eunwoosh <eunwoo.shin@intel.com>
* Update for release 1.4.0 (#2399)
  Co-authored-by: Sungman Cho <sungman.cho@intel.com>
* Fix broken links in documentation (#2405)
  (fix docs links to datumaro's docs and otx's docs; bump version to 1.4.1)
* Update exportable code README (#2411)
* Update for release 1.4.1 (#2412)
* Add workaround for the incorrect meta info in M-RCNN (used for XAI) (#2437)
* Add model category attributes to model template (#2439)
  (add model category & status fields and is_default_for_task attr; update model templates with category attrs; add integration tests for model template consistency; refactor common tests by generator)
  Signed-off-by: Songki Choi <songki.choi@intel.com>
* Update for 1.4.2rc1 (#2441)
* Fix label list order for h-label classification (#2440)
* Modify fq numbers for lite HRNet (#2445)
* Update PTQ ignored scope for hrnet 18 mod2 (#2449)
* Fix OpenVINO inference for legacy models (#2450)
  (add tests; use specific exceptions)
* Update for 1.4.2rc2 (#2455)
* Prevent zero-sized saliency map in tiling if tile size is too big (#2452)
  (also for PyTorch; add unit tests for Tiler merge-features methods)
  Co-authored-by: Galina <galina.zalesskaya@intel.com>
* Update pot fq reference number to 15 (#2456)
* Bump datumaro version to 1.5.0rc0 (#2470)
* Set tox version constraint (#2472), see tox-dev/tox#3110
* Bug fix for albumentations (#2467)
  Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>
* Update for release 1.4.2rc3
* Add a dummy hierarchical config required by MAPI (#2483); bump version to 1.4.2rc4
* Bump datumaro version (#2502)
  (remove deprecated/removed attribute usage of datumaro)
* Upgrade nncf version for 1.4 release (#2459)
  (fix nncf interface warning; set the exact nncf version; update FQ refs after NNCF upgrade; use NNCF from pypi)
* Update version for release 1.4.2rc5 (#2507)
* Update for 1.4.2 (#2514)
* Create branch release/1.5.0
* Delete mem cache handler after training is done (#2535)
* Fix bug that auto batch size doesn't consider distributed training (#2533)
  (consider distributed training while searching batch size; revert gpu memory upper bound; change "allocated" to "reserved"; add unit test for distributed training)
* Apply fix progress hook to release 1.5.0 (#2539)
  (fix hook ordering issue: AdaptiveRepeatHook changes runner.max_iters before ProgressHook; fix multi-label/h-label issue; fix auto_bs issue; rename get_data_cfg -> get_subset_data_cfg; remove adding AdaptiveRepeatDataHook for autobs; fix detection and segmentation case in Geti scenario)
  Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com>
* Re-introduce adaptive scheduling for training (#2541)
* Update for release 1.4.3rc1 (#2542)
* Mirror Anomaly ModelAPI changes (#2531)
  (migrate anomaly exportable code to modelAPI (#2432); fix license in PR template; remove color conversion in streamer; remove reverse_input_channels; remove metadata from load method; remove anomalib openvino inferencer; support legacy OpenVINO model; transform image; add configs)
* Re-introduce adaptive training (#2543)
* Fix auto input size mismatch in eval & export (#2530)
  (re-enable E2E tests for issue #2518; add input size check in export testing; format float numbers in log; fix NNCF export shape mismatch; fix saliency map issue; disable auto input size if tiling enabled)
  Signed-off-by: Songki Choi <songki.choi@intel.com>
* Update ref. fq number for anomaly e2e (#2547)
* Skip e2e det tests by issue #2548 (#2550)
* Add skip to chained TC for issue #2548 (#2552)
* Update for release 1.4.3 (#2551)
* Update MAPI for 1.5 release (#2555)
  (upgrade MAPI to v0.1.6 (#2529); update exp code demo commit; fix MAPI imports)
* Update ModelAPI configuration (#2564)
  (update MAPI rt info for detection; update export info for cls, det and seg; update unit tests)
* Disable QAT for SegNexts (#2565)
  (delete obsolete pot configs; move NNCF skip marks to test commands to avoid duplication)
* Add Anomaly modelAPI changes to releases/1.4.0 (#2563)
  (apply otx anomaly 1.5 changes; fix compression config; fix modelAPI imports; update integration tests; edit config types; update keys in deployed model)
  Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com> and Kim, Sungchul <sungchul.kim@intel.com>
* Fix CustomNonLinearClsHead when batch_size is set to 1 (#2571): fix bn1d issue
  Co-authored-by: sungmanc <sungmanc@intel.com>
* Update ModelAPI configuration (#2564, from 1.4) (#2568): same changes as #2564
* Update for 1.4.4rc1 (#2572)
* Hotfix DatasetEntity.get_combined_subset function loop (#2577)
* Revert default input size to `Default` due to YOLOX perf regression (#2580)
  Signed-off-by: Songki Choi <songki.choi@intel.com>
* Fix the degradation issue of the classification task (#2585)
  (revert to sync with 1.4.0; remove repeat data; convert to the RGB value; fix color conversion logic)
* Bump datumaro version to 1.5.1rc3 (#2587)
* Add label ids to anomaly OpenVINO model xml (#2590)
* Fix DeiT-Tiny model regression during class-incremental training (#2594)
  (enable IB loss for DeiT-Tiny; update changelog; add docstring)
* Add label ids to model xml in release 1.5 (#2591)
* Fix DeiT-Tiny regression test for release/1.4.0 (#2595)
* Fix mmcls bug not wrapping model in DataParallel on CPUs (#2601): wrap multi-label and h-label classification models with MMDataParallel in case of CPU training
  Signed-off-by: Songki Choi <songki.choi@intel.com>
* Fix h-label loss normalization issue w/ exclusive label group of single label (#2604)
  (fix non-linear version)
  Signed-off-by: Songki Choi <songki.choi@intel.com>
* Boost up Image numpy accessing speed through PIL (#2586)
  (add fallback logic with PIL open; use convert instead of draft; update CHANGELOG)
* Add missing import pathlib for cls e2e testing (#2610)
* Fix division by zero in class-incremental learning for classification (#2606)
  (add empty label to reproduce zero-division error; prevent division by zero; fix inefficient sampling; update license and CHANGELOG.md)
  Signed-off-by: Songki Choi <songki.choi@intel.com>
* Unify logger usage (#2612)
  (unify anomaly logger to otx; change logger file path; configure logger when ConfigManager instance is initialized; move config_logger to each cli file; change parts still using mmcv logger)
* Fix XAI algorithm for Detection (#2609)
  (improve saliency maps algorithm for detection; changes for one-class case; update unit tests and CHANGELOG)
* Tighten dependency constraint, only adapting latest patches (#2607)
  (adjust scikit-image and tensorboard versions w.r.t. python version; remove version specifier for scikit-image)
* Add metadata to optimized model (#2618)
  Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>
* Modify omegaconf version constraint
* [release 1.5.0] Fix XAI algorithm for Detection (#2617): update detection XAI algorithm
* Update dependency constraint (#2622)
* Update tpp (#2621)
* Fix h-label bug of missing parent labels in output (#2626)
  (fix h-label test data label schema; update CHANGELOG.md)
  Signed-off-by: Songki Choi <songki.choi@intel.com>
* Update publish workflow (#2625): push whl to internal pypi
* Bump datumaro version to ~=1.5.0
* Fix mistake made while merging back 1.4.4
* Modify readme
* Remove openvino model wrapper class and its tests
* [release 1.5.0] DeiT: enable tests + add ViTFeatureVectorHook (#2630)
* Fix docs broken link to datumaro h-label docs
  Signed-off-by: Songki Choi <songki.choi@intel.com>
* Fix wrong label settings for non-anomaly task ModelAPIs
  Signed-off-by: Songki Choi <songki.choi@intel.com>
* Update publish workflow for tag checking (#2632)
* Update e2e tests for XAI Detection (#2634): fix e2e XAI ref value
* Disable QAT for newly added models (#2636)
* Update release note and readme (#2637)
  (remove package upload step on internal publish wf; update release note, changelog, and readme; update version string to 1.6.0dev; fix datumaro version to 1.6.0rc0)
* Mergeback 1.5.0 to develop (#2642)
  Co-authored-by: Galina Zalesskaya <galina.zalesskaya@intel.com> and Jaeguk Hyun <jaeguk.hyun@intel.com>
* Revert "Mergeback 1.5.0 to develop (#2642)" (#2645): reverts commit 2f67686
* Add a tool to help conduct experiments (#2651)
  (implement run and experiment; implement experiment result aggregator; get export model speed; refine the way arguments are updated in cmd; refine resource tracker; support anomaly on research framework; eval task saves avg_time_per_images as result; add new argument to track CPU & GPU utilization and memory usage (#2500); add pynvml to requirements; support optimize in research framework; handle cases where failed cases exist; make argparse raise an error rather than exit if a problem exists; save failed cases as yaml file; add epoch to metric; use latest log.json file; move experiment.py from cli to tools; merge otx run feature into experiment.py; use otx.cli instead of each otx entry; add feature to parse single workspace)
* Make `max_num_detections` configurable (#2647)
  (fix RCNN case with integration test; apply max_num_detections to train_cfg too)
  Signed-off-by: Songki Choi <songki.choi@intel.com>
* Revert inference batch size to 1 for instance segmentation (#2648)
  Signed-off-by: Songki Choi <songki.choi@intel.com>
* Fix CPU training issue on non-CUDA system (#2655): fix bug where auto adaptive batch size raises an error if CUDA isn't available (#2410)
  Co-authored-by: Sungman Cho <sungman.cho@intel.com> and Eunwoo Shin <eunwoo.shin@intel.com>
* Remove unnecessary log while building a model (#2658)
  (revert loggers in otx/algorithms/detection/adapters/mmdet/utils/builder.py and otx/algorithms/classification/adapters/mmcls/utils/builder.py)
* Fix a minor bug of experiment.py (#2662)
* Do not check avg_time_per_image during test (#2665)
  (do not call stdev when array length is less than 2; ignore avg_time_per_image during regression test)
* Update docs for enabling sphinx.ext.autosummary (#2654)
  (fix errors/warnings in docs source; enable sphinx-autosummary for API reference documentation; update Makefile and sphinx configuration)
* Update PTQ docs (#2672): replace POT -> PTQ
* Update regression tests for develop (#2652)
  (update regression tests (#2556); update test suite; update regression criteria)
  Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com>
* Exclude py37 target config for cibuildwheel (#2673)
* Add `--dryrun` option to tools/experiment.py (#2674)
  (fix variable override bug; add --dryrun option to see experiment list)
  Signed-off-by: Songki Choi <songki.choi@intel.com>
* Update OTX explain CLI arguments (#2671)
  (change int8 to uint8 in XAI tests; add probabilities for CLI demo; rename arguments for explain; fix integration "explain_all_classes" test for OV)
* Fix e2e tests for explain (#2681)
* Add README.md for experiment.py (#2688)
* Fix typo in reg test cmd (#2691)
* Select more proper model weight file according to previously run commands (#2696)
* Add visual prompting zero-shot learning (`learn` & `infer`) (#2616)
  (add algo backend & configs; update dataset, tasks, and models; enable `learn` task through otx.train; enable `infer` (TODO: normalize points); fix when `state_dict` is None; enable `ZeroShotInferenceCallback`; enable processor to be used independently; change `postprocess_masks` to staticmethod; add `PromptGetter`; change `id` to `id_`; add unit and integration tests; update CHANGELOG.md; update docstring and type annotations)
* Fix unsupported dtype in ov graph constant converter (#2676)
  (fix more ov-graph related unit tests)
* Skip failure TC, adding issue number ref. (#2717)
* Fix visual prompting e2e test (#2719): skip zero-shot e2e
* Remove duplicated variable combination in experiment.py (#2713)
* Enhance detection & instance segmentation experiment (#2710)
  (compute precision and recall along with f-measure; log performance; accept ellipse annotation from datumaro format; fix dataset adapter condition for det/iset; insert garbage collection between experiments)
* Upgrade NNCF & OpenVINO (#2656)
  (upgrade OV MAPI and NNCF version; update demo and exportable code requirements; update datumaro; add rust installation; update NNCF configs and FQs for IS models, anomaly, and cls, with intermediate reverts of commits 7c8db8c, 5b91c32, 8926c51, and f904c0c; disable explain for NNCF detection task; add unit test to cover the changes)
* Fix multilabel classification class index (#2736)
* Refine parsing of final training score in experiment.py (#2738)
* Make mean teacher algorithm consider distributed training (#2729)
  (re-enable test case; move tensor not to cuda but to the current device)
* Add visual prompting zero-shot learning (`export`, IR inference) (#2706)
  (repeats the #2616 commit list, then: reuse SAM modules for `export` and add dataset; enable `export`; convert fp32; add prompt getter in `model_adapter_keys`; initial `Inferencer`, `Task`, and `Model`; use original mask decoder during inference; remove internal loop in `PromptGetter`; update `PromptGetter` to use only tensor ops; fix issue where `original_size` disappears in the onnx graph; fix unexpected IF and update inputs to avoid dynamic operations unsupported by OV on CPU; enable `PromptGetter` to handle the number of labels itself; add ov inferencer; fix overflow during dtype casting and duplicated cast; change mo CLI to API; avoid repeatedly assigning constant tensors/arrays; fix intg & e2e tests; update CHANGELOG.md)
* Automate performance benchmark (#2742)
  (add parameterized perf test template; split accuracy / perf tests; automate speed test setting; add benchmark summary fixture; add multi/h-label, detection, instance segmentation, tiling, semantic segmentation, and anomaly tests)
* Update tools/experiment.py (#2751)
  (use constant exp directory name; support parsing dynamic eval output; fix minor unit test bug)
* Add performance benchmark github action workflow (#2762)
* Split accuracy & speed benchmark github workflows (#2763)
* Fix a bug where an error is raised when the train set size is greater than the minimum batch size in HPO by exactly 1 (#2760)
* Fix a bug where the process tracking resource usage doesn't exit when the main process raises an error (#2765)
  (terminate the tracking process on error; call stop() only if ExitStack is None)
* Skip large datasets for iSeg perf benchmark (#2766)
* Support multiple experiments in a single recipe for tools/experiment.py (#2757)
  (update logging of failed cases; update README file; fix bugs for single command and failed-case output; exclude first epoch from iter time calculation; fix weird name used when there are no variables; initialize iter_time and data_time at first)
* Enable perf benchmark result logging to mlflow server (#2768)
* Bump datumaro version to 1.6.0rc1 (#2784)
  (remove rust toolchain installation step from workflows)
* Update perf logging (#2785)
* Update perf logging workflow to get branch+sha from gh context (#2791)
  (skip logging when tracking server uri is not configured)
* Add visual prompting zero-shot learning (optimize, documentation, bug fixes) (#2753)
  (fix bbox resizing; add post-checking between masks with different labels; use the first mask in the first loop; add optimize task; add e2e; update documentation and CHANGELOG)
* Check performance benchmark result against reference (#2821)
  (average 'Small' (/1 /2 /3) dataset benchmark results; load perf result with indexing; add speed and accuracy ref checks for all tasks)
* Mergeback releases/1.5.0 to develop (#2830)
  (update MAPI version (#2730); update dependency for exportable code (#2732); filter invalid polygon shapes (#2795))
  Co-authored-by: Vladislav Sovrasov <sovrasov.vlad@gmail.com> and Eugene Liu <eugene.liu@intel.com>
* Create OSSF scorecard workflow (#2831)
* Fix ossf/scorecard-action version (#2832): update scorecard.yml
* Update perf benchmark reference (#2843)
* Set default wf permission to read-all (#2882)
* Remedy token permission issue (#2888)
  (remedy token-permission issues, part 2; remove dispatch event from scorecard wf)
* Add progress callback interface to HPO (#2889)
  (add progress callback as HPO argument; deal with edge case)
* Restrict configurable parameters to avoid unreasonable cost for SaaS trial (#2891)
  (reduce max value of POT samples to 1k; reduce max value of num_iters to 1k)
* Fix more token-permission issues, part 3 (#2893)
* Resolve pinned-dependency issues on publish_internal workflow (#2907)
* Forward unittest workloads to AWS (#2887)
* Resolve pinned dependency issues on workflows (#2909)
* Fix pinned-dependency issues, part 2 (#2911)
* Add pinning dependencies (#2916)
* Update pip install cmd to use hashes (#2919)
* Fix HPO progress callback bug (#2908)
* Fix pinned-dependencies issues (#2929)
* Remove unused test files (#2930)
* Update weekly workflow to run perf tests (#2920)
  (fix missing fixture in perf test; update input to perf tests for weekly)
  Co-authored-by: Songki Choi <songki.choi@intel.com>
* Adjust permission of documentation workflows from pages to contents for writing (#2933)
* Remove unused import

Signed-off-by: Kim, Vinnam <vinnam.kim@intel.com>
Signed-off-by: Songki Choi <songki.choi@intel.com>
Co-authored-by: Yunchu Lee <yunchu.lee@intel.com>
Co-authored-by: Kim, Sungchul <sungchul.kim@intel.com>
Co-authored-by: Vinnam Kim <vinnam.kim@intel.com>
Co-authored-by: Evgeny Tsykunov <evgeny.tsykunov@intel.com>
Co-authored-by: Songki Choi <songki.choi@intel.com>
Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com>
Co-authored-by: Jaeguk Hyun <jaeguk.hyun@intel.com>
Co-authored-by: Sungman Cho <sungman.cho@intel.com>
Co-authored-by: Eugene Liu <eugene.liu@intel.com>
Co-authored-by: Wonju Lee <wonju.lee@intel.com>
Co-authored-by: Dick Ameln <dick.ameln@intel.com>
Co-authored-by: Vladislav Sovrasov <sovrasov.vlad@gmail.com>
Co-authored-by: sungchul.kim <sungchul@ikvensx010>
Co-authored-by: GalyaZalesskaya <galina.zalesskaya@intel.com>
Co-authored-by: Harim Kang <harim.kang@intel.com>
Co-authored-by: Ashwin Vaidya <ashwin.vaidya@intel.com>
Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>
Co-authored-by: sungmanc <sungmanc@intel.com>
1 parent 54751ca · commit 6914551

17 files changed: +157 −176 lines

.github/workflows/code_scan.yml (+9 −2)

@@ -20,7 +20,10 @@ jobs:
         with:
           python-version: "3.10"
       - name: Install dependencies
-        run: python -m pip install tox==4.21.1
+        run: |
+          pip install --require-hashes --no-deps -r requirements/gh-actions.txt
+          pip-compile --generate-hashes -o /tmp/otx-dev-requirements.txt requirements/dev.txt
+          pip install --require-hashes --no-deps -r /tmp/otx-dev-requirements.txt
       - name: Trivy Scanning
         env:
           TRIVY_DOWNLOAD_URL: ${{ vars.TRIVY_DOWNLOAD_URL }}
@@ -43,7 +46,11 @@ jobs:
         with:
           python-version: "3.10"
       - name: Install dependencies
-        run: python -m pip install tox==4.21.1
+        run: |
+          pip install --require-hashes --no-deps -r requirements/gh-actions.txt
+          pip-compile --generate-hashes -o /tmp/otx-dev-requirements.txt requirements/dev.txt
+          pip install --require-hashes --no-deps -r /tmp/otx-dev-requirements.txt
+          rm /tmp/otx-dev-requirements.txt
       - name: Bandit Scanning
         run: tox -e bandit-scan
       - name: Upload Bandit artifact

.github/workflows/docs.yml (+5 −1)

@@ -21,7 +21,11 @@ jobs:
         with:
           python-version: "3.10"
       - name: Install dependencies
-        run: python -m pip install -r requirements/dev.txt
+        run: |
+          pip install --require-hashes --no-deps -r requirements/gh-actions.txt
+          pip-compile --generate-hashes -o /tmp/otx-dev-requirements.txt requirements/dev.txt
+          pip install --require-hashes --no-deps -r /tmp/otx-dev-requirements.txt
+          rm /tmp/otx-dev-requirements.txt
       - name: Build-Docs
         run: tox -e build-doc
       - name: Create gh-pages branch

.github/workflows/docs_stable.yml (+5 −1)

@@ -22,7 +22,11 @@ jobs:
         with:
           python-version: "3.10"
       - name: Install dependencies
-        run: python -m pip install -r requirements/dev.txt
+        run: |
+          pip install --require-hashes --no-deps -r requirements/gh-actions.txt
+          pip-compile --generate-hashes -o /tmp/otx-dev-requirements.txt requirements/dev.txt
+          pip install --require-hashes --no-deps -r /tmp/otx-dev-requirements.txt
+          rm /tmp/otx-dev-requirements.txt
       - name: Build-Docs
         run: tox -e build-doc
       - name: Create gh-pages branch

.github/workflows/perf-accuracy.yml (+29 −1)

@@ -33,6 +33,34 @@ on:
           - export
           - optimize
         default: optimize
+      artifact-prefix:
+        type: string
+        default: perf-accuracy-benchmark
+  workflow_call:
+    inputs:
+      model-type:
+        type: string
+        description: Model type to run benchmark [default, all]
+        default: default
+      data-size:
+        type: string
+        description: Dataset size to run benchmark [small, medium, large, all]
+        default: all
+      num-repeat:
+        type: number
+        description: Overrides default per-data-size number of repeat setting
+        default: 0
+      num-epoch:
+        type: number
+        description: Overrides default per-model number of epoch setting
+        default: 0
+      eval-upto:
+        type: string
+        description: The last operation to evaluate. 'optimize' means all. [train, export, optimize]
+        default: optimize
+      artifact-prefix:
+        type: string
+        default: perf-accuracy-benchmark

 # Declare default permissions as read only.
 permissions: read-all
@@ -73,4 +101,4 @@ jobs:
       task: ${{ matrix.task }}
       timeout-minutes: 8640
       upload-artifact: true
-      artifact-prefix: perf-accuracy-benchmark
+      artifact-prefix: ${{ inputs.perf-accuracy-benchmark }}

.github/workflows/perf-speed.yml (+29 −1)

@@ -33,6 +33,34 @@ on:
           - export
           - optimize
         default: optimize
+      artifact-prefix:
+        type: string
+        default: perf-speed-benchmark
+  workflow_call:
+    inputs:
+      model-type:
+        type: string
+        description: Model type to run benchmark [default, all]
+        default: default
+      data-size:
+        type: string
+        description: Dataset size to run benchmark [small, medium, large, all]
+        default: medium
+      num-repeat:
+        type: number
+        description: Overrides default per-data-size number of repeat setting
+        default: 1
+      num-epoch:
+        type: number
+        description: Overrides default per-model number of epoch setting
+        default: 3
+      eval-upto:
+        type: string
+        description: The last operation to evaluate. 'optimize' means all [train, export, optimize]
+        default: optimize
+      artifact-prefix:
+        type: string
+        default: perf-speed-benchmark

 # Declare default permissions as read only.
 permissions: read-all
@@ -59,4 +87,4 @@ jobs:
       task: all
       timeout-minutes: 8640
       upload-artifact: true
-      artifact-prefix: perf-speed-benchmark
+      artifact-prefix: ${{ inputs.artifact-prefix }}
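
Both perf workflows now declare the same inputs under `workflow_dispatch` and `workflow_call`, so the weekly pipeline can pass overrides while manual runs fall back to the declared defaults. The sketch below imitates that precedence (caller-supplied value, else declared default, else empty string) with a toy substituter; `resolve_inputs` is a made-up helper for illustration, not GitHub's actual expression engine.

```python
import re

def resolve_inputs(template: str, inputs: dict, defaults: dict) -> str:
    """Substitute ${{ inputs.<name> }} with the caller's value or the declared default."""
    def repl(match: re.Match) -> str:
        name = match.group(1)
        if name in inputs:
            return str(inputs[name])      # value passed by the calling workflow
        if name in defaults:
            return str(defaults[name])    # default declared on the input
        return ""                         # unset inputs render as an empty string
    return re.sub(r"\$\{\{\s*inputs\.([\w-]+)\s*\}\}", repl, template)

defaults = {"artifact-prefix": "perf-speed-benchmark"}
line = "artifact-prefix: ${{ inputs.artifact-prefix }}"

# Weekly caller overrides the prefix, as weekly.yml does:
print(resolve_inputs(line, {"artifact-prefix": "weekly-perf-speed-benchmark"}, defaults))
# A plain dispatch run keeps the declared default:
print(resolve_inputs(line, {}, defaults))
```

This also hints at why `${{ inputs.perf-accuracy-benchmark }}` in the perf-accuracy diff would resolve to an empty value: no input by that name is declared.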

.github/workflows/pre_merge.yml (+4 −2)

@@ -31,9 +31,10 @@ jobs:
         python-version: "3.10"
       - name: Install dependencies
         run: |
-          pip install pip-tools==7.3.0
+          pip install --require-hashes --no-deps -r requirements/gh-actions.txt
           pip-compile --generate-hashes -o /tmp/otx-dev-requirements.txt requirements/dev.txt
           pip install --require-hashes --no-deps -r /tmp/otx-dev-requirements.txt
+          rm /tmp/otx-dev-requirements.txt
       - name: Code quality checks
         run: tox -vv -e pre-commit-all-py310-pt1
   Unit-Test:
@@ -79,9 +80,10 @@ jobs:
         python-version: "3.8"
       - name: Install dependencies
         run: |
-          pip install pip-tools==7.3.0
+          pip install --require-hashes --no-deps -r requirements/gh-actions.txt
           pip-compile --generate-hashes -o /tmp/otx-dev-requirements.txt requirements/dev.txt
           pip install --require-hashes --no-deps -r /tmp/otx-dev-requirements.txt
+          rm /tmp/otx-dev-requirements.txt
       - name: Run unit test
         run: tox -vv -e unittest-all-py38-pt1
       - name: Upload coverage artifact

.github/workflows/publish.yml (+2 −1)

@@ -33,9 +33,10 @@ jobs:
         python-version: "3.10"
       - name: Install pypa/build
         run: |
-          pip install pip-tools==7.3.0
+          pip install --require-hashes --no-deps -r requirements/gh-actions.txt
           pip-compile --generate-hashes -o /tmp/otx-publish-requirements.txt requirements/publish.txt
           pip install --require-hashes --no-deps -r /tmp/otx-publish-requirements.txt
+          rm /tmp/otx-publish-requirements.txt
       - name: Build sdist
         run: python -m build --sdist
       - uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # v3.1.3

.github/workflows/publish_internal.yml (+4 −2)

@@ -31,9 +31,10 @@ jobs:
         python-version: "3.10"
       - name: Install pypa/build
         run: |
-          pip install pip-tools==7.3.0
+          pip install --require-hashes --no-deps -r requirements/gh-actions.txt
           pip-compile --generate-hashes -o /tmp/otx-publish-requirements.txt requirements/publish.txt
           pip install --require-hashes --no-deps -r /tmp/otx-publish-requirements.txt
+          rm /tmp/otx-publish-requirements.txt
       - name: Build sdist
         run: python -m build --sdist
       - uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # v3.1.3
@@ -56,9 +57,10 @@ jobs:
         python-version: "3.10"
       - name: Install dependencies
         run: |
-          pip install pip-tools==7.3.0
+          pip install --require-hashes --no-deps -r requirements/gh-actions.txt
           pip-compile --generate-hashes -o /tmp/otx-publish-requirements.txt requirements/publish.txt
           pip install --require-hashes --no-deps -r /tmp/otx-publish-requirements.txt
+          rm /tmp/otx-publish-requirements.txt
       - name: Download artifacts
         uses: actions/download-artifact@9bc31d5ccc31df68ecc42ccf4149144866c47d8a # v3.0.2
         with:

.github/workflows/run_tests_in_tox.yml (+2 −1)

@@ -52,9 +52,10 @@ jobs:
         python-version: ${{ inputs.python-version }}
       - name: Install dependencies
         run: |
-          pip install pip-tools==7.3.0
+          pip install --require-hashes --no-deps -r requirements/gh-actions.txt
           pip-compile --generate-hashes -o /tmp/otx-dev-requirements.txt requirements/dev.txt
           pip install --require-hashes --no-deps -r /tmp/otx-dev-requirements.txt
+          rm /tmp/otx-dev-requirements.txt
       - name: Run Tests
         env:
           MLFLOW_TRACKING_SERVER_URI: ${{ vars.MLFLOW_TRACKING_SERVER_URI }}

.github/workflows/run_tests_in_tox_custom.yml (+2 −1)

@@ -58,9 +58,10 @@ jobs:
         python-version: ${{ inputs.python-version }}
       - name: Install dependencies
         run: |
-          pip install pip-tools==7.3.0
+          pip install --require-hashes --no-deps -r requirements/gh-actions.txt
           pip-compile --generate-hashes -o /tmp/otx-dev-requirements.txt requirements/dev.txt
           pip install --require-hashes --no-deps -r /tmp/otx-dev-requirements.txt
+          rm /tmp/otx-dev-requirements.txt
       - name: Run Tests
         env:
           MLFLOW_TRACKING_SERVER_URI: ${{ vars.MLFLOW_TRACKING_SERVER_URI }}

.github/workflows/weekly.yml (+19 −37)

@@ -10,41 +10,23 @@ on:
 permissions: read-all

 jobs:
-  Regression-Tests:
-    strategy:
-      fail-fast: false
-      matrix:
-        include:
-          - toxenv_task: "iseg"
-            test_dir: "tests/regression/instance_segmentation/test_instance_segmentation.py"
-            task: "instance_segmentation"
-          - toxenv_task: "iseg_t"
-            test_dir: "tests/regression/instance_segmentation/test_tiling_instance_segmentation.py"
-            task: "instance_segmentation"
-          - toxenv_task: "seg"
-            test_dir: "tests/regression/semantic_segmentation"
-            task: "segmentation"
-          - toxenv_task: "det"
-            test_dir: "tests/regression/detection"
-            task: "detection"
-          - toxenv_task: "ano"
-            test_dir: "tests/regression/anomaly"
-            task: "anomaly"
-          - toxenv_task: "act"
-            test_dir: "tests/regression/action"
-            task: "action"
-          - toxenv_task: "cls"
-            test_dir: "tests/regression/classification"
-            task: "classification"
-    name: Regression-Test-py310-${{ matrix.toxenv_task }}
-    uses: ./.github/workflows/run_tests_in_tox.yml
+  Performance-Speed-Tests:
+    name: Performance-Speed-py310
+    uses: ./.github/workflows/perf-speed.yml
     with:
-      python-version: "3.10"
-      toxenv-pyver: "py310"
-      toxenv-task: ${{ matrix.toxenv_task }}
-      tests-dir: ${{ matrix.test_dir }}
-      runs-on: "['self-hosted', 'Linux', 'X64', 'dmount']"
-      task: ${{ matrix.task }}
-      timeout-minutes: 8640
-      upload-artifact: true
-      artifact-prefix: "weekly-test-results"
+      model-type: default
+      data-size: medium
+      num-repeat: 1
+      num-epoch: 3
+      eval-upto: optimize
+      artifact-prefix: weekly-perf-speed-benchmark
+  Performance-Accuracy-Tests:
+    name: Performance-Accuracy-py310
+    uses: ./.github/workflows/perf-accuracy.yml
+    with:
+      model-type: default
+      data-size: all
+      num-repeat: 0
+      num-epoch: 0
+      eval-upto: optimize
+      artifact-prefix: weekly-perf-accuracy-benchmark

requirements/gh-actions.txt (new file, +45)

@@ -0,0 +1,45 @@
+#
+# This file is autogenerated by pip-compile with Python 3.10
+# by the following command:
+#
+#    pip-compile --generate-hashes --output-file=requirements.txt requirements/gh-actions.txt
+#
+build==1.0.3 \
+    --hash=sha256:538aab1b64f9828977f84bc63ae570b060a8ed1be419e7870b8b4fc5e6ea553b \
+    --hash=sha256:589bf99a67df7c9cf07ec0ac0e5e2ea5d4b37ac63301c4986d1acb126aa83f8f
+    # via pip-tools
+click==8.1.7 \
+    --hash=sha256:ae74fb96c20a0277a1d615f1e4d73c8414f5a98db8b799a7931d1582f3390c28 \
+    --hash=sha256:ca9853ad459e787e2192211578cc907e7594e294c7ccc834310722b41b9ca6de
+    # via pip-tools
+packaging==23.2 \
+    --hash=sha256:048fb0e9405036518eaaf48a55953c750c11e1a1b68e0dd1a9d62ed0c092cfc5 \
+    --hash=sha256:8c491190033a9af7e1d931d0b5dacc2ef47509b34dd0de67ed209b5203fc88c7
+    # via build
+pip-tools==7.4.0 \
+    --hash=sha256:a92a6ddfa86ff389fe6ace381d463bc436e2c705bd71d52117c25af5ce867bb7 \
+    --hash=sha256:b67432fd0759ed834c5367f9e0ce8c95441acecfec9c8e24b41aca166757adf0
+    # via -r requirements/gh-actions.txt
+pyproject-hooks==1.0.0 \
+    --hash=sha256:283c11acd6b928d2f6a7c73fa0d01cb2bdc5f07c57a2eeb6e83d5e56b97976f8 \
+    --hash=sha256:f271b298b97f5955d53fb12b72c1fb1948c22c1a6b70b315c54cedaca0264ef5
+    # via
+    #   build
+    #   pip-tools
+tomli==2.0.1 \
+    --hash=sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc \
+    --hash=sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f
+    # via
+    #   build
+    #   pip-tools
+    #   pyproject-hooks
+wheel==0.42.0 \
+    --hash=sha256:177f9c9b0d45c47873b619f5b650346d632cdc35fb5e4d25058e09c9e581433d \
+    --hash=sha256:c45be39f7882c9d34243236f2d63cbd58039e360f85d0913425fbd7ceea617a8
+    # via pip-tools
+
+# WARNING: The following packages were not pinned, but pip requires them to be
+# pinned when the requirements file includes hashes and the requirement is not
+# satisfied by a package already installed. Consider using the --allow-unsafe flag.
+# pip
+# setuptools
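
`pip install --require-hashes` succeeds only when every requirement in the file carries at least one `--hash` option, which is why the workflows bootstrap from this pip-compile-generated lockfile. A rough sketch of that invariant below; `check_hash_pins` is a hypothetical helper, far simpler than pip's real requirements parser.

```python
import re

def check_hash_pins(requirements: str) -> dict:
    """Map each "name==version" pin in a pip-compile lockfile to its sha256 hash count.

    Continuation lines ending in a backslash attach their --hash options to the
    preceding requirement, mirroring how pip reads hash-pinned files.
    """
    pins = {}
    current = None
    for raw in requirements.splitlines():
        line = raw.strip().rstrip("\\").strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and "# via ..." comments
        if line.startswith("--hash=sha256:"):
            if current is not None:
                pins[current] += 1
        elif re.match(r"^[A-Za-z0-9_.-]+==", line):
            current = line.split()[0]
            pins[current] = 0
    return pins

# A fragment of requirements/gh-actions.txt from this commit:
sample = """\
build==1.0.3 \\
    --hash=sha256:538aab1b64f9828977f84bc63ae570b060a8ed1be419e7870b8b4fc5e6ea553b \\
    --hash=sha256:589bf99a67df7c9cf07ec0ac0e5e2ea5d4b37ac63301c4986d1acb126aa83f8f
    # via pip-tools
pip-tools==7.4.0 \\
    --hash=sha256:a92a6ddfa86ff389fe6ace381d463bc436e2c705bd71d52117c25af5ce867bb7 \\
    --hash=sha256:b67432fd0759ed834c5367f9e0ce8c95441acecfec9c8e24b41aca166757adf0
"""
pins = check_hash_pins(sample)
assert all(count >= 1 for count in pins.values()), "a pin without hashes aborts --require-hashes"
```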

src/otx/algorithms/classification/adapters/mmcls/models/heads/custom_vision_transformer_head.py (−11)

@@ -6,8 +6,6 @@
 from mmcls.models.builder import HEADS
 from mmcls.models.heads import VisionTransformerClsHead

-from otx.algorithms.common.utils import cast_bf16_to_fp32
-

 @HEADS.register_module()
 class CustomVisionTransformerClsHead(VisionTransformerClsHead):
@@ -34,15 +32,6 @@ def loss(self, cls_score, gt_label, feature=None):
         losses["loss"] = loss
         return losses

-    def post_process(self, pred):
-        """Post processing."""
-        pred = cast_bf16_to_fp32(pred)
-        return super().post_process(pred)
-
-    def forward(self, x):
-        """Forward fuction of CustomVisionTransformerClsHead class."""
-        return self.simple_test(x)
-
     def forward_train(self, x, gt_label, **kwargs):
         """Forward_train fuction of CustomVisionTransformerClsHead class."""
         x = self.pre_logits(x)

tests/perf/test_classification.py (+2 −2)

@@ -52,7 +52,7 @@ class TestPerfSingleLabelClassification:

     @pytest.mark.parametrize("fxt_model_id", MODEL_TEMPLATES, ids=MODEL_IDS, indirect=True)
     @pytest.mark.parametrize("fxt_benchmark", BENCHMARK_CONFIGS.items(), ids=BENCHMARK_CONFIGS.keys(), indirect=True)
-    def test_accuracy(self, fxt_model_id: str, fxt_benchmark: OTXBenchmark):
+    def test_accuracy(self, fxt_model_id: str, fxt_benchmark: OTXBenchmark, fxt_check_benchmark_result: Callable):
         """Benchmark accruacy metrics."""
         result = fxt_benchmark.run(
             model_id=fxt_model_id,
@@ -301,7 +301,7 @@ def test_accuracy(self, fxt_model_id: str, fxt_benchmark: OTXBenchmark, fxt_check_benchmark_result: Callable):

     @pytest.mark.parametrize("fxt_model_id", MODEL_TEMPLATES, ids=MODEL_IDS, indirect=True)
     @pytest.mark.parametrize("fxt_benchmark", BENCHMARK_CONFIGS.items(), ids=BENCHMARK_CONFIGS.keys(), indirect=True)
-    def test_speed(self, fxt_model_id: str, fxt_benchmark: OTXBenchmark, fxt_check_benchmark_results: Callable):
+    def test_speed(self, fxt_model_id: str, fxt_benchmark: OTXBenchmark, fxt_check_benchmark_result: Callable):
         """Benchmark train time per iter / infer time per image."""
         fxt_benchmark.track_resources = True
         result = fxt_benchmark.run(

tests/run_code_checks.sh (−22)

This file was deleted.
