
Plotting training curves leads to Matplotlib not responding error #2507

Closed
lucas-sancere opened this issue Jan 23, 2023 · 7 comments

@lucas-sancere commented Jan 23, 2023

Hi, thank you for your amazing work and support!

I have an issue while plotting the training curves: the displayed image is black and I get a "Matplotlib not responding" error message.

My mmsegmentation conda env should contain all the required packages (I installed the requirements and seaborn as asked). Here are my seaborn and matplotlib versions:

  • seaborn 0.12.2
  • matplotlib 3.6.3

I am using the original code.

I ran

python tools/analyze_logs.py logs/20230118_181102.log.json --keys loss --legend loss

as explained in the documentation.

My JSON file (from which I want the logs to be plotted) is the following:

{"env_info": "sys.platform: linux\nPython: 3.8.15 | packaged by conda-forge | (default, Nov 22 2022, 08:49:35) [GCC 10.4.0]\nCUDA available: True\nGPU 0,1,2,3: Tesla V100-SXM2-32GB\nCUDA_HOME: /usr/local/cuda-11.6\nNVCC: Cuda compilation tools, release 11.6, V11.6.124\nGCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0\nPyTorch: 1.13.1\nPyTorch compiling details: PyTorch built with:\n  - GCC 9.3\n  - C++ Version: 201402\n  - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications\n  - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)\n  - OpenMP 201511 (a.k.a. OpenMP 4.5)\n  - LAPACK is enabled (usually provided by MKL)\n  - NNPACK is enabled\n  - CPU capability usage: AVX2\n  - CUDA Runtime 11.6\n  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37\n  - CuDNN 8.3.2  (built against CUDA 11.5)\n  - Magma 2.6.1\n  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.6, CUDNN_VERSION=8.3.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.13.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \n\nTorchVision: 0.14.1\nOpenCV: 4.7.0\nMMCV: 1.7.1\nMMCV Compiler: GCC 9.4\nMMCV CUDA Compiler: not available\nMMSegmentation: 0.30.0+6d7a5b9", "seed": 857420422, "exp_name": "segmenter_vit-l_SCC_mask_8x1_640x640_160k_ade20k.py", "mmseg_version": "0.30.0+6d7a5b9", "config": "checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/segmenter/vit_large_p16_384_20220308-d4efb41d.pth'\nbackbone_norm_cfg = dict(type='LN', eps=1e-06, requires_grad=True)\nmodel = dict(\n    type='EncoderDecoder',\n    pretrained=\n    'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/segmenter/vit_large_p16_384_20220308-d4efb41d.pth',\n    backbone=dict(\n        type='VisionTransformer',\n        img_size=(640, 640),\n        patch_size=16,\n        in_channels=3,\n        embed_dims=1024,\n        num_layers=24,\n        num_heads=16,\n        drop_path_rate=0.1,\n        attn_drop_rate=0.0,\n        drop_rate=0.0,\n        final_norm=True,\n        norm_cfg=dict(type='LN', eps=1e-06, requires_grad=True),\n        
with_cls_token=True,\n        interpolate_mode='bicubic',\n        pretrained=\n        'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/segmenter/vit_large_p16_384_20220308-d4efb41d.pth'\n    ),\n    decode_head=dict(\n        type='SegmenterMaskTransformerHead',\n        in_channels=1024,\n        channels=1024,\n        num_classes=150,\n        num_layers=2,\n        num_heads=16,\n        embed_dims=1024,\n        dropout_ratio=0.0,\n        loss_decode=dict(\n            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),\n    test_cfg=dict(mode='slide', crop_size=(640, 640), stride=(608, 608)))\ndataset_type = 'UQDataset'\ndata_root = 'data/UQNon-Melanoma/data_tumor/'\nimg_norm_cfg = dict(\n    mean=[127.5, 127.5, 127.5], std=[127.5, 127.5, 127.5], to_rgb=True)\ncrop_size = (640, 640)\ntrain_pipeline = [\n    dict(type='LoadImageFromFile'),\n    dict(type='LoadAnnotations', reduce_zero_label=True),\n    dict(type='Resize', img_scale=(2560, 640), ratio_range=(0.5, 2.0)),\n    dict(type='RandomCrop', crop_size=(640, 640), cat_max_ratio=0.75),\n    dict(type='RandomFlip', prob=0.5),\n    dict(type='PhotoMetricDistortion'),\n    dict(\n        type='Normalize',\n        mean=[127.5, 127.5, 127.5],\n        std=[127.5, 127.5, 127.5],\n        to_rgb=True),\n    dict(type='Pad', size=(640, 640), pad_val=0, seg_pad_val=255),\n    dict(type='DefaultFormatBundle'),\n    dict(type='Collect', keys=['img', 'gt_semantic_seg'])\n]\ntest_pipeline = [\n    dict(type='LoadImageFromFile'),\n    dict(\n        type='MultiScaleFlipAug',\n        img_scale=(2560, 640),\n        flip=False,\n        transforms=[\n            dict(type='Resize', keep_ratio=True),\n            dict(type='RandomFlip'),\n            dict(\n                type='Normalize',\n                mean=[127.5, 127.5, 127.5],\n                std=[127.5, 127.5, 127.5],\n                to_rgb=True),\n            dict(type='ImageToTensor', keys=['img']),\n            dict(type='Collect', keys=['img'])\n        ])\n]\ndata = dict(\n    samples_per_gpu=1,\n    workers_per_gpu=4,\n    train=dict(\n        type='UQDataset',\n        data_root='data/UQNon-Melanoma/data_tumor/',\n        img_dir='training/images',\n        ann_dir='training/annotations',\n        pipeline=[\n            dict(type='LoadImageFromFile'),\n            dict(type='LoadAnnotations', reduce_zero_label=True),\n            dict(type='Resize', img_scale=(2560, 640), ratio_range=(0.5, 2.0)),\n            dict(type='RandomCrop', crop_size=(640, 640), cat_max_ratio=0.75),\n            dict(type='RandomFlip', prob=0.5),\n            dict(type='PhotoMetricDistortion'),\n            dict(\n                type='Normalize',\n                mean=[127.5, 127.5, 127.5],\n                std=[127.5, 127.5, 127.5],\n                to_rgb=True),\n            dict(type='Pad', size=(640, 640), pad_val=0, seg_pad_val=255),\n            dict(type='DefaultFormatBundle'),\n            dict(type='Collect', keys=['img', 'gt_semantic_seg'])\n        ]),\n    val=dict(\n        type='UQDataset',\n        data_root='data/UQNon-Melanoma/data_tumor/',\n        img_dir='validation/images',\n        ann_dir='validation/annotations',\n        pipeline=[\n            dict(type='LoadImageFromFile'),\n            dict(\n                type='MultiScaleFlipAug',\n                img_scale=(2560, 640),\n                flip=False,\n                transforms=[\n                    dict(type='Resize', keep_ratio=True),\n                    dict(type='RandomFlip'),\n             
       dict(\n                        type='Normalize',\n                        mean=[127.5, 127.5, 127.5],\n                        std=[127.5, 127.5, 127.5],\n                        to_rgb=True),\n                    dict(type='ImageToTensor', keys=['img']),\n                    dict(type='Collect', keys=['img'])\n                ])\n        ]),\n    test=dict(\n        type='UQDataset',\n        data_root='data/UQNon-Melanoma/data_tumor/',\n        img_dir='validation/images',\n        ann_dir='validation/annotations',\n        pipeline=[\n            dict(type='LoadImageFromFile'),\n            dict(\n                type='MultiScaleFlipAug',\n                img_scale=(2560, 640),\n                flip=False,\n                transforms=[\n                    dict(type='Resize', keep_ratio=True),\n                    dict(type='RandomFlip'),\n                    dict(\n                        type='Normalize',\n                        mean=[127.5, 127.5, 127.5],\n                        std=[127.5, 127.5, 127.5],\n                        to_rgb=True),\n                    dict(type='ImageToTensor', keys=['img']),\n                    dict(type='Collect', keys=['img'])\n                ])\n        ]))\nlog_config = dict(\n    interval=50, hooks=[dict(type='TextLoggerHook', by_epoch=False)])\ndist_params = dict(backend='nccl')\nlog_level = 'INFO'\nload_from = None\nresume_from = None\nworkflow = [('train', 1)]\ncudnn_benchmark = True\noptimizer = dict(type='SGD', lr=0.001, momentum=0.9, weight_decay=0.0)\noptimizer_config = dict()\nlr_config = dict(policy='poly', power=0.9, min_lr=0.0001, by_epoch=False)\nrunner = dict(type='IterBasedRunner', max_iters=160000)\ncheckpoint_config = dict(by_epoch=False, interval=16000)\nevaluation = dict(interval=16000, metric='mIoU', pre_eval=True)\nwork_dir = './work_dirs/segmenter_vit-l_SCC_mask_8x1_640x640_160k_ade20k'\ngpu_ids = range(0, 4)\nauto_resume = False\ndevice = 'cuda'\nseed = 857420422\n", "CLASSES": ["background", "tissue"], "PALETTE": [[0, 0, 0], [255, 255, 255]]}
{"mode": "train", "epoch": 4, "iter": 50, "lr": 0.001, "memory": 12604, "data_time": 1.39235, "decode.loss_ce": 0.05515, "decode.acc_seg": 94.1289, "loss": 0.05515, "time": 2.47276}
{"mode": "train", "epoch": 8, "iter": 100, "lr": 0.001, "memory": 12604, "data_time": 0.4728, "decode.loss_ce": 0.00042, "decode.acc_seg": 100.0, "loss": 0.00042, "time": 2.2488}
{"mode": "train", "epoch": 11, "iter": 150, "lr": 0.001, "memory": 12604, "data_time": 1.00362, "decode.loss_ce": 0.00031, "decode.acc_seg": 100.0, "loss": 0.00031, "time": 1.99797}
{"mode": "train", "epoch": 15, "iter": 200, "lr": 0.001, "memory": 12604, "data_time": 0.6951, "decode.loss_ce": 0.00024, "decode.acc_seg": 100.0, "loss": 0.00024, "time": 2.3227}
{"mode": "train", "epoch": 18, "iter": 250, "lr": 0.001, "memory": 12604, "data_time": 0.88364, "decode.loss_ce": 0.00023, "decode.acc_seg": 100.0, "loss": 0.00023, "time": 2.19709}
{"mode": "train", "epoch": 22, "iter": 300, "lr": 0.001, "memory": 12604, "data_time": 0.86131, "decode.loss_ce": 0.00021, "decode.acc_seg": 100.0, "loss": 0.00021, "time": 2.27652}
{"mode": "train", "epoch": 25, "iter": 350, "lr": 0.001, "memory": 12604, "data_time": 0.30892, "decode.loss_ce": 0.0002, "decode.acc_seg": 100.0, "loss": 0.0002, "time": 1.87786}
{"mode": "train", "epoch": 29, "iter": 400, "lr": 0.001, "memory": 12604, "data_time": 0.64722, "decode.loss_ce": 0.00019, "decode.acc_seg": 100.0, "loss": 0.00019, "time": 2.37347}
{"mode": "train", "epoch": 33, "iter": 450, "lr": 0.001, "memory": 12604, "data_time": 0.98634, "decode.loss_ce": 0.00019, "decode.acc_seg": 100.0, "loss": 0.00019, "time": 2.18088}
{"mode": "train", "epoch": 36, "iter": 500, "lr": 0.001, "memory": 12604, "data_time": 0.88684, "decode.loss_ce": 0.00018, "decode.acc_seg": 100.0, "loss": 0.00018, "time": 2.08157}
{"mode": "train", "epoch": 40, "iter": 550, "lr": 0.001, "memory": 12604, "data_time": 0.46206, "decode.loss_ce": 0.00017, "decode.acc_seg": 100.0, "loss": 0.00017, "time": 2.29777}
{"mode": "train", "epoch": 43, "iter": 600, "lr": 0.001, "memory": 12604, "data_time": 0.40531, "decode.loss_ce": 0.00016, "decode.acc_seg": 100.0, "loss": 0.00016, "time": 2.10088}
{"mode": "train", "epoch": 47, "iter": 650, "lr": 0.001, "memory": 12604, "data_time": 0.61208, "decode.loss_ce": 0.00017, "decode.acc_seg": 100.0, "loss": 0.00017, "time": 2.49792}
{"mode": "train", "epoch": 50, "iter": 700, "lr": 0.001, "memory": 12604, "data_time": 0.63873, "decode.loss_ce": 0.00016, "decode.acc_seg": 100.0, "loss": 0.00016, "time": 2.01552}
{"mode": "train", "epoch": 54, "iter": 750, "lr": 0.001, "memory": 12604, "data_time": 0.82535, "decode.loss_ce": 0.00015, "decode.acc_seg": 100.0, "loss": 0.00015, "time": 2.26153}
{"mode": "train", "epoch": 58, "iter": 800, "lr": 0.001, "memory": 12604, "data_time": 0.52934, "decode.loss_ce": 0.00014, "decode.acc_seg": 100.0, "loss": 0.00014, "time": 2.28959}
{"mode": "train", "epoch": 61, "iter": 850, "lr": 0.001, "memory": 12604, "data_time": 0.31338, "decode.loss_ce": 0.00014, "decode.acc_seg": 100.0, "loss": 0.00014, "time": 1.93299}
{"mode": "train", "epoch": 65, "iter": 900, "lr": 0.001, "memory": 12604, "data_time": 0.60283, "decode.loss_ce": 0.00014, "decode.acc_seg": 100.0, "loss": 0.00014, "time": 2.12194}
{"mode": "train", "epoch": 68, "iter": 950, "lr": 0.001, "memory": 12604, "data_time": 0.60519, "decode.loss_ce": 0.00014, "decode.acc_seg": 100.0, "loss": 0.00014, "time": 2.00718}
{"mode": "train", "epoch": 72, "iter": 1000, "lr": 0.00099, "memory": 12604, "data_time": 0.59255, "decode.loss_ce": 0.00012, "decode.acc_seg": 100.0, "loss": 0.00012, "time": 2.33037}
{"mode": "train", "epoch": 75, "iter": 1050, "lr": 0.00099, "memory": 12604, "data_time": 0.45557, "decode.loss_ce": 0.00013, "decode.acc_seg": 100.0, "loss": 0.00013, "time": 1.98173}
{"mode": "train", "epoch": 79, "iter": 1100, "lr": 0.00099, "memory": 12604, "data_time": 0.53828, "decode.loss_ce": 0.00013, "decode.acc_seg": 100.0, "loss": 0.00013, "time": 2.24311}
{"mode": "train", "epoch": 83, "iter": 1150, "lr": 0.00099, "memory": 12604, "data_time": 0.64166, "decode.loss_ce": 0.00013, "decode.acc_seg": 100.0, "loss": 0.00013, "time": 2.18542}
{"mode": "train", "epoch": 86, "iter": 1200, "lr": 0.00099, "memory": 12604, "data_time": 1.03963, "decode.loss_ce": 0.00013, "decode.acc_seg": 100.0, "loss": 0.00013, "time": 2.19844}
{"mode": "train", "epoch": 90, "iter": 1250, "lr": 0.00099, "memory": 12604, "data_time": 1.01109, "decode.loss_ce": 0.00013, "decode.acc_seg": 100.0, "loss": 0.00013, "time": 2.1746}
{"mode": "train", "epoch": 93, "iter": 1300, "lr": 0.00099, "memory": 12604, "data_time": 0.76242, "decode.loss_ce": 0.00012, "decode.acc_seg": 100.0, "loss": 0.00012, "time": 1.79542}
{"mode": "train", "epoch": 97, "iter": 1350, "lr": 0.00099, "memory": 12604, "data_time": 0.5948, "decode.loss_ce": 0.00011, "decode.acc_seg": 100.0, "loss": 0.00011, "time": 2.13739}
{"mode": "train", "epoch": 100, "iter": 1400, "lr": 0.00099, "memory": 12604, "data_time": 0.70092, "decode.loss_ce": 0.00012, "decode.acc_seg": 100.0, "loss": 0.00012, "time": 2.02334}
{"mode": "train", "epoch": 104, "iter": 1450, "lr": 0.00099, "memory": 12604, "data_time": 1.12407, "decode.loss_ce": 0.00011, "decode.acc_seg": 100.0, "loss": 0.00011, "time": 2.22654}
{"mode": "train", "epoch": 108, "iter": 1500, "lr": 0.00099, "memory": 12604, "data_time": 0.71571, "decode.loss_ce": 0.0001, "decode.acc_seg": 100.0, "loss": 0.0001, "time": 2.08956}
{"mode": "train", "epoch": 111, "iter": 1550, "lr": 0.00099, "memory": 12604, "data_time": 0.98758, "decode.loss_ce": 0.00011, "decode.acc_seg": 100.0, "loss": 0.00011, "time": 2.14244}
{"mode": "train", "epoch": 115, "iter": 1600, "lr": 0.00099, "memory": 12604, "data_time": 0.76038, "decode.loss_ce": 0.00011, "decode.acc_seg": 100.0, "loss": 0.00011, "time": 2.22721}
{"mode": "train", "epoch": 118, "iter": 1650, "lr": 0.00099, "memory": 12604, "data_time": 0.71202, "decode.loss_ce": 0.0001, "decode.acc_seg": 100.0, "loss": 0.0001, "time": 2.06086}
{"mode": "train", "epoch": 122, "iter": 1700, "lr": 0.00099, "memory": 12604, "data_time": 0.74418, "decode.loss_ce": 0.00011, "decode.acc_seg": 100.0, "loss": 0.00011, "time": 2.2318}
{"mode": "train", "epoch": 125, "iter": 1750, "lr": 0.00099, "memory": 12604, "data_time": 0.58646, "decode.loss_ce": 0.0001, "decode.acc_seg": 100.0, "loss": 0.0001, "time": 2.12177}
{"mode": "train", "epoch": 129, "iter": 1800, "lr": 0.00099, "memory": 12604, "data_time": 1.0636, "decode.loss_ce": 0.0001, "decode.acc_seg": 100.0, "loss": 0.0001, "time": 2.38337}
{"mode": "train", "epoch": 133, "iter": 1850, "lr": 0.00099, "memory": 12604, "data_time": 0.61994, "decode.loss_ce": 0.0001, "decode.acc_seg": 100.0, "loss": 0.0001, "time": 2.34883}
{"mode": "train", "epoch": 136, "iter": 1900, "lr": 0.00099, "memory": 12604, "data_time": 0.53816, "decode.loss_ce": 0.0001, "decode.acc_seg": 100.0, "loss": 0.0001, "time": 2.12589}
{"mode": "train", "epoch": 140, "iter": 1950, "lr": 0.00099, "memory": 12604, "data_time": 0.98254, "decode.loss_ce": 0.0001, "decode.acc_seg": 100.0, "loss": 0.0001, "time": 2.21288}
{"mode": "train", "epoch": 143, "iter": 2000, "lr": 0.00099, "memory": 12604, "data_time": 0.40593, "decode.loss_ce": 0.0001, "decode.acc_seg": 100.0, "loss": 0.0001, "time": 1.95001}
{"mode": "train", "epoch": 147, "iter": 2050, "lr": 0.00099, "memory": 12604, "data_time": 0.8078, "decode.loss_ce": 9e-05, "decode.acc_seg": 100.0, "loss": 9e-05, "time": 2.15667}
{"mode": "train", "epoch": 150, "iter": 2100, "lr": 0.00099, "memory": 12604, "data_time": 0.55432, "decode.loss_ce": 9e-05, "decode.acc_seg": 100.0, "loss": 9e-05, "time": 2.21421}
{"mode": "train", "epoch": 154, "iter": 2150, "lr": 0.00099, "memory": 12604, "data_time": 0.48614, "decode.loss_ce": 0.0001, "decode.acc_seg": 100.0, "loss": 0.0001, "time": 2.38011}
{"mode": "train", "epoch": 158, "iter": 2200, "lr": 0.00099, "memory": 12604, "data_time": 0.44334, "decode.loss_ce": 9e-05, "decode.acc_seg": 100.0, "loss": 9e-05, "time": 2.26885}
{"mode": "train", "epoch": 161, "iter": 2250, "lr": 0.00099, "memory": 12604, "data_time": 0.72191, "decode.loss_ce": 9e-05, "decode.acc_seg": 100.0, "loss": 9e-05, "time": 1.95722}
{"mode": "train", "epoch": 165, "iter": 2300, "lr": 0.00099, "memory": 12604, "data_time": 0.807, "decode.loss_ce": 9e-05, "decode.acc_seg": 100.0, "loss": 9e-05, "time": 2.1057}
{"mode": "train", "epoch": 168, "iter": 2350, "lr": 0.00099, "memory": 12604, "data_time": 0.80236, "decode.loss_ce": 9e-05, "decode.acc_seg": 100.0, "loss": 9e-05, "time": 2.29605}
{"mode": "train", "epoch": 172, "iter": 2400, "lr": 0.00099, "memory": 12604, "data_time": 0.77599, "decode.loss_ce": 9e-05, "decode.acc_seg": 100.0, "loss": 9e-05, "time": 2.27897}
{"mode": "train", "epoch": 175, "iter": 2450, "lr": 0.00099, "memory": 12604, "data_time": 0.41943, "decode.loss_ce": 9e-05, "decode.acc_seg": 100.0, "loss": 9e-05, "time": 2.10878}
{"mode": "train", "epoch": 179, "iter": 2500, "lr": 0.00099, "memory": 12604, "data_time": 0.81445, "decode.loss_ce": 9e-05, "decode.acc_seg": 100.0, "loss": 9e-05, "time": 2.11642}
{"mode": "train", "epoch": 183, "iter": 2550, "lr": 0.00099, "memory": 12604, "data_time": 0.7095, "decode.loss_ce": 9e-05, "decode.acc_seg": 100.0, "loss": 9e-05, "time": 2.24751}
{"mode": "train", "epoch": 186, "iter": 2600, "lr": 0.00099, "memory": 12604, "data_time": 1.06163, "decode.loss_ce": 9e-05, "decode.acc_seg": 100.0, "loss": 9e-05, "time": 2.09796}
{"mode": "train", "epoch": 190, "iter": 2650, "lr": 0.00099, "memory": 12604, "data_time": 0.49737, "decode.loss_ce": 9e-05, "decode.acc_seg": 100.0, "loss": 9e-05, "time": 2.19205}
{"mode": "train", "epoch": 193, "iter": 2700, "lr": 0.00099, "memory": 12604, "data_time": 0.65844, "decode.loss_ce": 8e-05, "decode.acc_seg": 100.0, "loss": 8e-05, "time": 2.05518}
{"mode": "train", "epoch": 197, "iter": 2750, "lr": 0.00099, "memory": 12604, "data_time": 0.63725, "decode.loss_ce": 8e-05, "decode.acc_seg": 100.0, "loss": 8e-05, "time": 2.08564}
{"mode": "train", "epoch": 200, "iter": 2800, "lr": 0.00099, "memory": 12604, "data_time": 1.15972, "decode.loss_ce": 8e-05, "decode.acc_seg": 100.0, "loss": 8e-05, "time": 2.23976}
{"mode": "train", "epoch": 204, "iter": 2850, "lr": 0.00099, "memory": 12604, "data_time": 0.54562, "decode.loss_ce": 8e-05, "decode.acc_seg": 100.0, "loss": 8e-05, "time": 2.42791}
{"mode": "train", "epoch": 208, "iter": 2900, "lr": 0.00099, "memory": 12604, "data_time": 0.45023, "decode.loss_ce": 8e-05, "decode.acc_seg": 100.0, "loss": 8e-05, "time": 2.42966}
{"mode": "train", "epoch": 211, "iter": 2950, "lr": 0.00099, "memory": 12604, "data_time": 1.08513, "decode.loss_ce": 8e-05, "decode.acc_seg": 100.0, "loss": 8e-05, "time": 1.99757}
{"mode": "train", "epoch": 215, "iter": 3000, "lr": 0.00098, "memory": 12604, "data_time": 0.81872, "decode.loss_ce": 8e-05, "decode.acc_seg": 100.0, "loss": 8e-05, "time": 2.27621}
{"mode": "train", "epoch": 218, "iter": 3050, "lr": 0.00098, "memory": 12604, "data_time": 0.51123, "decode.loss_ce": 8e-05, "decode.acc_seg": 100.0, "loss": 8e-05, "time": 2.12881}
{"mode": "train", "epoch": 222, "iter": 3100, "lr": 0.00098, "memory": 12604, "data_time": 0.43562, "decode.loss_ce": 8e-05, "decode.acc_seg": 100.0, "loss": 8e-05, "time": 2.38123}
{"mode": "train", "epoch": 225, "iter": 3150, "lr": 0.00098, "memory": 12604, "data_time": 0.73829, "decode.loss_ce": 8e-05, "decode.acc_seg": 100.0, "loss": 8e-05, "time": 2.04172}
{"mode": "train", "epoch": 229, "iter": 3200, "lr": 0.00098, "memory": 12604, "data_time": 0.31451, "decode.loss_ce": 8e-05, "decode.acc_seg": 100.0, "loss": 8e-05, "time": 2.15284}
{"mode": "train", "epoch": 233, "iter": 3250, "lr": 0.00098, "memory": 12604, "data_time": 0.44961, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.23736}
{"mode": "train", "epoch": 236, "iter": 3300, "lr": 0.00098, "memory": 12604, "data_time": 0.49574, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.01236}
{"mode": "train", "epoch": 240, "iter": 3350, "lr": 0.00098, "memory": 12604, "data_time": 0.66981, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.17594}
{"mode": "train", "epoch": 243, "iter": 3400, "lr": 0.00098, "memory": 12604, "data_time": 0.52595, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.2365}
{"mode": "train", "epoch": 247, "iter": 3450, "lr": 0.00098, "memory": 12604, "data_time": 0.97919, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.20689}
{"mode": "train", "epoch": 250, "iter": 3500, "lr": 0.00098, "memory": 12604, "data_time": 0.53007, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.01288}
{"mode": "train", "epoch": 254, "iter": 3550, "lr": 0.00098, "memory": 12604, "data_time": 0.54836, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.35708}
{"mode": "train", "epoch": 258, "iter": 3600, "lr": 0.00098, "memory": 12604, "data_time": 0.75283, "decode.loss_ce": 8e-05, "decode.acc_seg": 100.0, "loss": 8e-05, "time": 2.10694}
{"mode": "train", "epoch": 261, "iter": 3650, "lr": 0.00098, "memory": 12604, "data_time": 0.52109, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.11073}
{"mode": "train", "epoch": 265, "iter": 3700, "lr": 0.00098, "memory": 12604, "data_time": 0.59775, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.24849}
{"mode": "train", "epoch": 268, "iter": 3750, "lr": 0.00098, "memory": 12604, "data_time": 0.52592, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.03795}
{"mode": "train", "epoch": 272, "iter": 3800, "lr": 0.00098, "memory": 12604, "data_time": 1.1889, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.15142}
{"mode": "train", "epoch": 275, "iter": 3850, "lr": 0.00098, "memory": 12604, "data_time": 0.75012, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.07365}
{"mode": "train", "epoch": 279, "iter": 3900, "lr": 0.00098, "memory": 12604, "data_time": 0.74881, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.18267}
{"mode": "train", "epoch": 283, "iter": 3950, "lr": 0.00098, "memory": 12604, "data_time": 0.86557, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.02202}
{"mode": "train", "epoch": 286, "iter": 4000, "lr": 0.00098, "memory": 12604, "data_time": 0.35931, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.0409}
{"mode": "train", "epoch": 290, "iter": 4050, "lr": 0.00098, "memory": 12604, "data_time": 0.51892, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.17536}
{"mode": "train", "epoch": 293, "iter": 4100, "lr": 0.00098, "memory": 12604, "data_time": 0.87109, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.08588}
{"mode": "train", "epoch": 297, "iter": 4150, "lr": 0.00098, "memory": 12604, "data_time": 0.8966, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.34713}
{"mode": "train", "epoch": 300, "iter": 4200, "lr": 0.00098, "memory": 12604, "data_time": 0.42729, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.03284}
{"mode": "train", "epoch": 304, "iter": 4250, "lr": 0.00098, "memory": 12604, "data_time": 0.59544, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.0918}
{"mode": "train", "epoch": 308, "iter": 4300, "lr": 0.00098, "memory": 12604, "data_time": 0.68606, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.18039}
{"mode": "train", "epoch": 311, "iter": 4350, "lr": 0.00098, "memory": 12604, "data_time": 0.6501, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.10028}
{"mode": "train", "epoch": 315, "iter": 4400, "lr": 0.00098, "memory": 12604, "data_time": 0.57763, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.18542}
{"mode": "train", "epoch": 318, "iter": 4450, "lr": 0.00098, "memory": 12604, "data_time": 0.63739, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.21427}
{"mode": "train", "epoch": 322, "iter": 4500, "lr": 0.00098, "memory": 12604, "data_time": 1.30529, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.22166}
{"mode": "train", "epoch": 325, "iter": 4550, "lr": 0.00098, "memory": 12604, "data_time": 0.876, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.02313}
{"mode": "train", "epoch": 329, "iter": 4600, "lr": 0.00098, "memory": 12604, "data_time": 0.78545, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.32957}
{"mode": "train", "epoch": 333, "iter": 4650, "lr": 0.00098, "memory": 12604, "data_time": 0.90142, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.26}
{"mode": "train", "epoch": 336, "iter": 4700, "lr": 0.00098, "memory": 12604, "data_time": 0.4081, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 1.94287}
{"mode": "train", "epoch": 340, "iter": 4750, "lr": 0.00098, "memory": 12604, "data_time": 1.24624, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.52696}
{"mode": "train", "epoch": 343, "iter": 4800, "lr": 0.00098, "memory": 12604, "data_time": 0.60035, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.22871}
{"mode": "train", "epoch": 347, "iter": 4850, "lr": 0.00098, "memory": 12604, "data_time": 0.64364, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.29864}
{"mode": "train", "epoch": 350, "iter": 4900, "lr": 0.00098, "memory": 12604, "data_time": 0.7803, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 1.9395}
{"mode": "train", "epoch": 354, "iter": 4950, "lr": 0.00097, "memory": 12604, "data_time": 0.58362, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.20221}
{"mode": "train", "epoch": 358, "iter": 5000, "lr": 0.00097, "memory": 12604, "data_time": 0.7796, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.32732}
{"mode": "train", "epoch": 361, "iter": 5050, "lr": 0.00097, "memory": 12604, "data_time": 0.69836, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.33189}
{"mode": "train", "epoch": 365, "iter": 5100, "lr": 0.00097, "memory": 12604, "data_time": 0.45714, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.05904}
{"mode": "train", "epoch": 368, "iter": 5150, "lr": 0.00097, "memory": 12604, "data_time": 0.42557, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.25898}
{"mode": "train", "epoch": 372, "iter": 5200, "lr": 0.00097, "memory": 12604, "data_time": 0.85478, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.31432}
{"mode": "train", "epoch": 375, "iter": 5250, "lr": 0.00097, "memory": 12604, "data_time": 0.53673, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 1.97333}
{"mode": "train", "epoch": 379, "iter": 5300, "lr": 0.00097, "memory": 12604, "data_time": 1.02989, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.17838}
{"mode": "train", "epoch": 383, "iter": 5350, "lr": 0.00097, "memory": 12604, "data_time": 0.59685, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.10168}
{"mode": "train", "epoch": 386, "iter": 5400, "lr": 0.00097, "memory": 12604, "data_time": 0.31949, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.00628}
{"mode": "train", "epoch": 390, "iter": 5450, "lr": 0.00097, "memory": 12604, "data_time": 1.28207, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.34271}
{"mode": "train", "epoch": 393, "iter": 5500, "lr": 0.00097, "memory": 12604, "data_time": 0.62157, "decode.loss_ce": 7e-05, "decode.acc_seg": 100.0, "loss": 7e-05, "time": 2.26449}
{"mode": "train", "epoch": 397, "iter": 5550, "lr": 0.00097, "memory": 12604, "data_time": 0.65777, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.21314}
{"mode": "train", "epoch": 400, "iter": 5600, "lr": 0.00097, "memory": 12604, "data_time": 0.544, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.04548}
{"mode": "train", "epoch": 404, "iter": 5650, "lr": 0.00097, "memory": 12604, "data_time": 0.61332, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.38014}
{"mode": "train", "epoch": 408, "iter": 5700, "lr": 0.00097, "memory": 12604, "data_time": 0.71011, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.12595}
{"mode": "train", "epoch": 411, "iter": 5750, "lr": 0.00097, "memory": 12604, "data_time": 0.54467, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.30728}
{"mode": "train", "epoch": 415, "iter": 5800, "lr": 0.00097, "memory": 12604, "data_time": 0.89306, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.28171}
{"mode": "train", "epoch": 418, "iter": 5850, "lr": 0.00097, "memory": 12604, "data_time": 0.61356, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.10791}
{"mode": "train", "epoch": 422, "iter": 5900, "lr": 0.00097, "memory": 12604, "data_time": 0.7881, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.24386}
{"mode": "train", "epoch": 425, "iter": 5950, "lr": 0.00097, "memory": 12604, "data_time": 0.76363, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.13818}
{"mode": "train", "epoch": 429, "iter": 6000, "lr": 0.00097, "memory": 12604, "data_time": 0.90909, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.28006}
{"mode": "train", "epoch": 433, "iter": 6050, "lr": 0.00097, "memory": 12604, "data_time": 0.93246, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.17846}
{"mode": "train", "epoch": 436, "iter": 6100, "lr": 0.00097, "memory": 12604, "data_time": 0.71759, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.10964}
{"mode": "train", "epoch": 440, "iter": 6150, "lr": 0.00097, "memory": 12604, "data_time": 0.90747, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.43475}
{"mode": "train", "epoch": 443, "iter": 6200, "lr": 0.00097, "memory": 12604, "data_time": 0.6435, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.03775}
{"mode": "train", "epoch": 447, "iter": 6250, "lr": 0.00097, "memory": 12604, "data_time": 0.79792, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.11409}
{"mode": "train", "epoch": 450, "iter": 6300, "lr": 0.00097, "memory": 12604, "data_time": 0.5484, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.16375}
{"mode": "train", "epoch": 454, "iter": 6350, "lr": 0.00097, "memory": 12604, "data_time": 0.60475, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.30637}
{"mode": "train", "epoch": 458, "iter": 6400, "lr": 0.00097, "memory": 12604, "data_time": 1.11348, "decode.loss_ce": 6e-05, "decode.acc_seg": 100.0, "loss": 6e-05, "time": 2.24638}
{"mode": "train", "epoch": 461, "iter": 6450, "lr": 0.00097, "memory": 12604, "data_time": 0.50044, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.11757}
{"mode": "train", "epoch": 465, "iter": 6500, "lr": 0.00097, "memory": 12604, "data_time": 0.93458, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.11198}
{"mode": "train", "epoch": 468, "iter": 6550, "lr": 0.00097, "memory": 12604, "data_time": 0.9722, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 1.91651}
{"mode": "train", "epoch": 472, "iter": 6600, "lr": 0.00097, "memory": 12604, "data_time": 0.83373, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.39269}
{"mode": "train", "epoch": 475, "iter": 6650, "lr": 0.00097, "memory": 12604, "data_time": 0.29323, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.01715}
{"mode": "train", "epoch": 479, "iter": 6700, "lr": 0.00097, "memory": 12604, "data_time": 0.98539, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.30433}
{"mode": "train", "epoch": 483, "iter": 6750, "lr": 0.00097, "memory": 12604, "data_time": 0.77367, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.31911}
{"mode": "train", "epoch": 486, "iter": 6800, "lr": 0.00097, "memory": 12604, "data_time": 0.89822, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 1.98456}
{"mode": "train", "epoch": 490, "iter": 6850, "lr": 0.00097, "memory": 12604, "data_time": 0.75494, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.27434}
{"mode": "train", "epoch": 493, "iter": 6900, "lr": 0.00096, "memory": 12604, "data_time": 0.68508, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.16598}
{"mode": "train", "epoch": 497, "iter": 6950, "lr": 0.00096, "memory": 12604, "data_time": 0.64791, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.23359}
{"mode": "train", "epoch": 500, "iter": 7000, "lr": 0.00096, "memory": 12604, "data_time": 0.39105, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.0586}
{"mode": "train", "epoch": 504, "iter": 7050, "lr": 0.00096, "memory": 12604, "data_time": 0.47965, "decode.loss_ce": 5e-05, "decode.acc_seg": 100.0, "loss": 5e-05, "time": 2.04709}

And it seems to be a valid JSON file.

Output in the terminal is only:

plot curve of logs/20230118_181102.log.json, metric is loss

@MengzhangLI (Contributor)

Hi, Lucas,

I checked your command and did not find anything wrong yet.

By the way, could it happen because of extremely low values like "loss": 5e-05 in your file? I think you can use the logfile.zip that was used in #1428 to check.
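One way to rule out the small values is to plot the same magnitudes with plain matplotlib, outside analyze_logs.py; a minimal sketch, with the numbers simply copied from the first few log lines above:

import matplotlib.pyplot as plt

# Loss values of the same magnitude as in the posted log (copied from its first lines).
iters = [50, 100, 150, 200]
losses = [0.05515, 0.00042, 0.00031, 0.00024]

plt.plot(iters, losses, label='loss')
plt.xlabel('iter')
plt.ylabel('loss')
plt.legend()
plt.show()  # if this hangs too, the problem is the backend, not the values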

Looking forward to your reply.

@lucas-sancere (Author) commented Jan 25, 2023

Hi, thank you for your help!

When I use the logfile.zip, I first see that the JSON file is not structured the same way as mine (only numbers), and second, when running:

python tools/analyze_logs.py logfile/20220327_155405.log.json

I ended up with:

Traceback (most recent call last):
  File "tools/analyze_logs.py", line 129, in <module>
    main()
  File "tools/analyze_logs.py", line 124, in main
    log_dicts = load_json_logs(json_logs)
  File "tools/analyze_logs.py", line 107, in load_json_logs
    log = json.loads(line.strip())
  File "/home/lsancere/anaconda3/envs/mmsegmentation2/lib/python3.8/json/__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "/home/lsancere/anaconda3/envs/mmsegmentation2/lib/python3.8/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/home/lsancere/anaconda3/envs/mmsegmentation2/lib/python3.8/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Is the JSON file from the zip maybe corrupted?
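A quick way to see what the decoder is choking on is to print the raw first line of the extracted file; a small sketch, with the path taken from the command above:

import json

path = 'logfile/20220327_155405.log.json'  # path from the command above

with open(path, 'r', errors='replace') as f:
    first_line = f.readline()

# repr() makes hidden characters visible (a BOM, HTML, or binary zip leftovers).
print(repr(first_line[:80]))

try:
    json.loads(first_line)
    print('first line parses as JSON')
except json.JSONDecodeError as err:
    print('first line is not valid JSON:', err)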

@lucas-sancere (Author) commented Jan 25, 2023

There seems to be a problem during extraction or zipping... Maybe you can paste the JSON file here for me to test it?

Thank you

@MeowZheng (Collaborator)

I think you can save the picture or change the matplotlib backend.

Please use the --out option to save the picture:

parser.add_argument('--out', type=str, default=None)

or use the --backend option to change the backend; there are more details about matplotlib backends here:
https://matplotlib.org/stable/users/explain/backends.html

I think saving the picture is the better way to solve your problem, since which backend works depends on the OS of your platform; a quick sketch of the non-interactive route is below.
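A minimal sketch of the idea (plain matplotlib, independent of analyze_logs.py, which exposes the same switches through --backend and --out):

import matplotlib
matplotlib.use('Agg')          # non-interactive backend: no GUI window is needed
import matplotlib.pyplot as plt

plt.plot([50, 100, 150], [0.055, 0.0004, 0.0003], label='loss')
plt.xlabel('iter')
plt.ylabel('loss')
plt.legend()
plt.savefig('loss_curve.png')  # write to disk instead of calling plt.show()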

@MengzhangLI (Contributor)

Hi @lucas-sancere, sorry for the late reply.

I think the command:

python tools/analyze_logs.py logs/20230118_181102.log.json --keys loss --legend loss

is OK, because you got the print plot curve of logs/20230118_181102.log.json, metric is loss in the terminal and no error was raised. So the problem perhaps lies in these lines.

Could you use an example like this one to check plt.show() first? If it has a problem, then you can save the picture with --out or switch the backend using --backend.
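A check along those lines can be as small as the following sketch (plain matplotlib, nothing from mmsegmentation); if even this hangs, the issue is the GUI backend rather than the tool:

import matplotlib
import matplotlib.pyplot as plt

print(matplotlib.get_backend())  # e.g. QtAgg, TkAgg, Agg

plt.plot([1, 2, 3], [1, 4, 9])
plt.show()  # should open a window; hanging here points at the backend / Qt setup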

Best,

@lucas-sancere (Author)

Hi, thank you both for your answers.

I tried saving with the --out argument:

(mmsegmentation2) lsancere@LucasLabtopBozekLab:~/These/CMMC/Codes/mmsegmentation_local_nocommits$ python tools/analyze_logs.py logs/20230126_110722.log.json --keys loss --legend loss --out ./test.png
plot curve of logs/20230126_110722.log.json, metric is loss
save curve to: ./test.png

But then the process runs forever without ending and I have to kill it (no image is saved).

My matplotlib backend is QtAgg (found with matplotlib.get_backend()) and when I run

(mmsegmentation2) lsancere@LucasLabtopBozekLab:~/These/CMMC/Codes/mmsegmentation_local_nocommits$ python tools/analyze_logs.py logs/20230118_181102.log.json --keys loss --legend loss --backend QtAgg
plot curve of logs/20230118_181102.log.json, metric is loss

I also get the process hanging and "Matplotlib not responding" as before. I tried with other JSON logs as well.

Thanks!

@lucas-sancere (Author) commented Mar 2, 2023

Hi @MengzhangLI and @MeowZheng,

Thank you for your help. I finally figured out (kind of) what was happening.

The problem was linked to my mmsegmentation conda env. Some packages were preventing the matplotlib backend from working properly (even after trying different backends). Even after uninstalling and reinstalling seaborn, matplotlib, and pyparsing, I still had the issue. Nevertheless, when trying with another conda env, the script works fine.
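For anyone comparing the broken env with a working one from inside Python rather than via conda list, a small sketch that prints the versions of the likely culprits (the package list below is just a guess based on this thread):

from importlib.metadata import version, PackageNotFoundError

# GUI- and plotting-related packages mentioned in this thread.
for pkg in ('matplotlib', 'seaborn', 'pyparsing', 'PyQt5', 'PyQt5-Qt5', 'PyQt5-sip'):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, 'not installed')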

Here is the list of packages that were inside my mmsegmentation conda env:

(mmsegmentation) lsancere@LucasLabtopBozekLab:~$ conda list
# packages in environment at /home/lsancere/anaconda3/envs/mmsegmentation:
#
# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                 conda_forge    conda-forge
_openmp_mutex             4.5                       2_gnu    conda-forge
addict                    2.4.0                    pypi_0    pypi
blas                      1.0                         mkl    conda-forge
brotlipy                  0.7.0           py38h0a891b7_1005    conda-forge
bzip2                     1.0.8                h7f98852_4    conda-forge
ca-certificates           2022.12.7            ha878542_0    conda-forge
certifi                   2022.12.7          pyhd8ed1ab_0    conda-forge
cffi                      1.15.1           py38h4a40e3a_3    conda-forge
charset-normalizer        2.1.1              pyhd8ed1ab_0    conda-forge
click                     8.1.3                    pypi_0    pypi
colorama                  0.4.6                    pypi_0    pypi
commonmark                0.9.1                    pypi_0    pypi
contourpy                 1.0.7                    pypi_0    pypi
cryptography              39.0.0           py38h3d167d9_0    conda-forge
cuda                      11.6.1                        0    nvidia
cuda-cccl                 11.6.55              hf6102b2_0    nvidia
cuda-command-line-tools   11.6.2                        0    nvidia
cuda-compiler             11.6.2                        0    nvidia
cuda-cudart               11.6.55              he381448_0    nvidia
cuda-cudart-dev           11.6.55              h42ad0f4_0    nvidia
cuda-cuobjdump            11.6.124             h2eeebcb_0    nvidia
cuda-cupti                11.6.124             h86345e5_0    nvidia
cuda-cuxxfilt             11.6.124             hecbf4f6_0    nvidia
cuda-driver-dev           11.6.55                       0    nvidia
cuda-gdb                  12.0.90                       0    nvidia
cuda-libraries            11.6.1                        0    nvidia
cuda-libraries-dev        11.6.1                        0    nvidia
cuda-memcheck             11.8.86                       0    nvidia
cuda-nsight               12.0.78                       0    nvidia
cuda-nsight-compute       12.0.0                        0    nvidia
cuda-nvcc                 11.6.124             hbba6d2d_0    nvidia
cuda-nvdisasm             12.0.76                       0    nvidia
cuda-nvml-dev             11.6.55              haa9ef22_0    nvidia
cuda-nvprof               12.0.90                       0    nvidia
cuda-nvprune              11.6.124             he22ec0a_0    nvidia
cuda-nvrtc                11.6.124             h020bade_0    nvidia
cuda-nvrtc-dev            11.6.124             h249d397_0    nvidia
cuda-nvtx                 11.6.124             h0630a44_0    nvidia
cuda-nvvp                 12.0.90                       0    nvidia
cuda-runtime              11.6.1                        0    nvidia
cuda-samples              11.6.101             h8efea70_0    nvidia
cuda-sanitizer-api        12.0.90                       0    nvidia
cuda-toolkit              11.6.1                        0    nvidia
cuda-tools                11.6.1                        0    nvidia
cuda-visual-tools         11.6.1                        0    nvidia
ffmpeg                    4.3                  hf484d3e_0    pytorch
fonttools                 4.38.0                   pypi_0    pypi
freetype                  2.12.1               hca18f0e_1    conda-forge
gds-tools                 1.5.0.59                      0    nvidia
gmp                       6.2.1                h58526e2_0    conda-forge
gnutls                    3.6.13               h85f3911_1    conda-forge
idna                      3.4                pyhd8ed1ab_0    conda-forge
importlib-metadata        6.0.0                    pypi_0    pypi
intel-openmp              2021.4.0          h06a4308_3561  
jpeg                      9e                   h166bdaf_2    conda-forge
kiwisolver                1.4.4                    pypi_0    pypi
lame                      3.100             h166bdaf_1003    conda-forge
lcms2                     2.14                 hfd0df8a_1    conda-forge
ld_impl_linux-64          2.39                 hcc3a1bd_1    conda-forge
lerc                      4.0.0                h27087fc_0    conda-forge
libcublas                 11.9.2.110           h5e84587_0    nvidia
libcublas-dev             11.9.2.110           h5c901ab_0    nvidia
libcufft                  10.7.1.112           hf425ae0_0    nvidia
libcufft-dev              10.7.1.112           ha5ce4c0_0    nvidia
libcufile                 1.5.0.59                      0    nvidia
libcufile-dev             1.5.0.59                      0    nvidia
libcurand                 10.3.1.50                     0    nvidia
libcurand-dev             10.3.1.50                     0    nvidia
libcusolver               11.3.4.124           h33c3c4e_0    nvidia
libcusparse               11.7.2.124           h7538f96_0    nvidia
libcusparse-dev           11.7.2.124           hbbe9722_0    nvidia
libdeflate                1.17                 h0b41bf4_0    conda-forge
libffi                    3.4.2                h7f98852_5    conda-forge
libgcc-ng                 12.2.0              h65d4601_19    conda-forge
libgomp                   12.2.0              h65d4601_19    conda-forge
libiconv                  1.17                 h166bdaf_0    conda-forge
libjpeg-turbo             2.1.4                h166bdaf_0    conda-forge
libnpp                    11.6.3.124           hd2722f0_0    nvidia
libnpp-dev                11.6.3.124           h3c42840_0    nvidia
libnsl                    2.0.0                h7f98852_0    conda-forge
libnvjpeg                 11.6.2.124           hd473ad6_0    nvidia
libnvjpeg-dev             11.6.2.124           hb5906b9_0    nvidia
libpng                    1.6.39               h753d276_0    conda-forge
libsqlite                 3.40.0               h753d276_0    conda-forge
libstdcxx-ng              12.2.0              h46fd767_19    conda-forge
libtiff                   4.5.0                h6adf6a1_2    conda-forge
libuuid                   2.32.1            h7f98852_1000    conda-forge
libwebp-base              1.2.4                h166bdaf_0    conda-forge
libxcb                    1.13              h7f98852_1004    conda-forge
libzlib                   1.2.13               h166bdaf_4    conda-forge
markdown                  3.4.1                    pypi_0    pypi
matplotlib                3.6.3                    pypi_0    pypi
mkl                       2021.4.0           h06a4308_640  
mkl-service               2.4.0            py38h95df7f1_0    conda-forge
mkl_fft                   1.3.1            py38h8666266_1    conda-forge
mkl_random                1.2.2            py38h1abd341_0    conda-forge
mmcls                     0.25.0                   pypi_0    pypi
mmcv-full                 1.7.1                    pypi_0    pypi
model-index               0.1.11                   pypi_0    pypi
ncurses                   6.3                  h27087fc_1    conda-forge
nettle                    3.6                  he412f7d_0    conda-forge
nsight-compute            2022.4.0.15                   0    nvidia
numpy                     1.23.5           py38h14f4228_0  
numpy-base                1.23.5           py38h31eccc5_0  
opencv-python             4.7.0.68                 pypi_0    pypi
openh264                  2.1.1                h780b84a_0    conda-forge
openjpeg                  2.5.0                hfec8fc6_2    conda-forge
openmim                   0.3.4                    pypi_0    pypi
openssl                   3.0.7                h0b41bf4_1    conda-forge
ordered-set               4.1.0                    pypi_0    pypi
packaging                 23.0                     pypi_0    pypi
pandas                    1.5.2                    pypi_0    pypi
pillow                    9.4.0            py38hb32c036_0    conda-forge
pip                       22.3.1             pyhd8ed1ab_0    conda-forge
prettytable               3.6.0                    pypi_0    pypi
pthread-stubs             0.4               h36c2ea0_1001    conda-forge
pycparser                 2.21               pyhd8ed1ab_0    conda-forge
pygments                  2.14.0                   pypi_0    pypi
pyopenssl                 23.0.0             pyhd8ed1ab_0    conda-forge
pyparsing                 3.0.9                    pypi_0    pypi
pyqt5                     5.15.7                   pypi_0    pypi
pyqt5-qt5                 5.15.2                   pypi_0    pypi
pyqt5-sip                 12.11.0                  pypi_0    pypi
pysocks                   1.7.1              pyha2e5f31_6    conda-forge
python                    3.8.15          h4a9ceb5_0_cpython    conda-forge
python-dateutil           2.8.2                    pypi_0    pypi
python_abi                3.8                      3_cp38    conda-forge
pytorch                   1.13.1          py3.8_cuda11.6_cudnn8.3.2_0    pytorch
pytorch-cuda              11.6                 h867d48c_1    pytorch
pytorch-mutex             1.0                        cuda    pytorch
pytz                      2022.7.1                 pypi_0    pypi
pyyaml                    6.0                      pypi_0    pypi
readline                  8.1.2                h0f457ee_0    conda-forge
requests                  2.28.2             pyhd8ed1ab_0    conda-forge
rich                      13.1.0                   pypi_0    pypi
seaborn                   0.12.2                   pypi_0    pypi
setuptools                65.6.3             pyhd8ed1ab_0    conda-forge
six                       1.16.0             pyh6c4a22f_0    conda-forge
tabulate                  0.9.0                    pypi_0    pypi
tk                        8.6.12               h27826a3_0    conda-forge
torchaudio                0.13.1               py38_cu116    pytorch
torchvision               0.14.1               py38_cu116    pytorch
typing_extensions         4.4.0              pyha770c72_0    conda-forge
urllib3                   1.26.14            pyhd8ed1ab_0    conda-forge
wcwidth                   0.2.6                    pypi_0    pypi
wheel                     0.38.4             pyhd8ed1ab_0    conda-forge
xorg-libxau               1.0.9                h7f98852_0    conda-forge
xorg-libxdmcp             1.1.3                h7f98852_0    conda-forge
xz                        5.2.6                h166bdaf_0    conda-forge
yapf                      0.32.0                   pypi_0    pypi
zipp                      3.11.0                   pypi_0    pypi
zlib                      1.2.13               h166bdaf_4    conda-forge
zstd                      1.5.2                h6239696_4    conda-forge

So, this list contains package versioning and dependency issues that prevent the matplotlib backend from behaving normally. As I didn't have any output when running the script, I didn't investigate which specific packages and versions are responsible for the issue.
The discussion now contains all the information needed if someone wants to replicate the issue, knowing that the OS was Ubuntu 20.04.5 LTS x86_64.
