Problem in generating .mokuro and .html files. #107

Open

hiderkee opened this issue Jul 11, 2024 · 4 comments

Comments
hiderkee commented Jul 11, 2024

Recently I had to reset my laptop and reinstall mokuro, following the steps in the Lazy Guide (except for the optional CUDA step). However, I keep getting this error message: "RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory", and the _OCR folder is created with nothing inside.

Before the reset, mokuro generated .html files without any problem.

My input:
mokuro /Users/npk/Documents/Nana/v01

The following output:

2024-07-11 20:48:47.899 | WARNING  | mokuro.run:run:55 - Legacy HTML output is deprecated and will not be further developed. It's recommended to use .mokuro format and web reader instead. Legacy HTML will be disabled by default in the future. To explicitly enable it, run with option --legacy-html.
2024-07-11 20:48:47.899 | INFO     | mokuro.run:run:63 - Scanning paths...

Found 1 volumes:

/Users/npk/Documents/Nana/v01 (unprocessed)

Each of the paths above will be treated as one volume.


Continue? [yes/no]yes
2024-07-11 20:48:57.252 | INFO     | mokuro.run:run:133 - Processing 1/1: /Users/npk/Documents/Nana/v01
Processing pages...:   0%|                              | 0/231 [00:00<?, ?it/s]2024-07-11 20:48:57.319 | INFO     | mokuro.manga_page_ocr:__init__:41 - Initializing text detector, using device cpu
Processing pages...:   0%|                              | 0/231 [00:00<?, ?it/s]
2024-07-11 20:48:57.380 | ERROR    | mokuro.run:run:142 - Error while processing /Users/npk/Documents/Nana/v01
Traceback (most recent call last):

  File "/Library/Frameworks/Python.framework/Versions/3.10/bin/mokuro", line 8, in <module>
    sys.exit(main())
    │   │    └ <function main at 0x10ace7010>
    │   └ <built-in function exit>
    └ <module 'sys' (built-in)>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mokuro/__main__.py", line 7, in main
    fire.Fire(run)
    │    │    └ <function run at 0x143bb8e50>
    │    └ <function Fire at 0x143b12710>
    └ <module 'fire' from '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/fire/__init__.py'>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/fire/core.py", line 143, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
                      │     │          │     │                 │        └ 'mokuro'
                      │     │          │     │                 └ {}
                      │     │          │     └ Namespace(verbose=False, interactive=False, separator='-', completion=None, help=False, trace=False)
                      │     │          └ ['/Users/npk/Documents/Nana/v01']
                      │     └ <function run at 0x143bb8e50>
                      └ <function _Fire at 0x143bb8820>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/fire/core.py", line 477, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
    │                           └ <function _CallAndUpdateTrace at 0x143bb8940>
    └ <function run at 0x143bb8e50>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/fire/core.py", line 693, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
                │   │          └ {}
                │   └ ['/Users/npk/Documents/Nana/v01']
                └ <function run at 0x143bb8e50>
> File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mokuro/run.py", line 137, in run
    mg.process_volume(volume, ignore_errors=ignore_errors, no_cache=no_cache)
    │  │              │                     │                       └ False
    │  │              │                     └ False
    │  │              └ <mokuro.volume.Volume object at 0x143bdc130>
    │  └ <function MokuroGenerator.process_volume at 0x143b120e0>
    └ <mokuro.mokuro_generator.MokuroGenerator object at 0x143bdc4f0>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mokuro/mokuro_generator.py", line 65, in process_volume
    raise e
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mokuro/mokuro_generator.py", line 57, in process_volume
    self.init_models()
    │    └ <function MokuroGenerator.init_models at 0x143b12050>
    └ <mokuro.mokuro_generator.MokuroGenerator object at 0x143bdc4f0>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mokuro/mokuro_generator.py", line 24, in init_models
    self.mpocr = MangaPageOcr(
    │    │       └ <class 'mokuro.manga_page_ocr.MangaPageOcr'>
    │    └ None
    └ <mokuro.mokuro_generator.MokuroGenerator object at 0x143bdc4f0>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mokuro/manga_page_ocr.py", line 42, in __init__
    self.text_detector = TextDetector(
    │                    └ <class 'comic_text_detector.inference.TextDetector'>
    └ <mokuro.manga_page_ocr.MangaPageOcr object at 0x143c444c0>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/comic_text_detector/inference.py", line 131, in __init__
    self.net = TextDetBase(model_path, device=device, act=act)
    │          │           │                  │           └ 'leaky'
    │          │           │                  └ 'cpu'
    │          │           └ PosixPath('/Users/npk/.cache/manga-ocr/comictextdetector.pt')
    │          └ <class 'comic_text_detector.basemodel.TextDetBase'>
    └ <comic_text_detector.inference.TextDetector object at 0x143c443d0>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/comic_text_detector/basemodel.py", line 225, in __init__
    self.blk_det, self.text_seg, self.text_det = get_base_det_models(model_path, device, half, act=act)
    │             │              │               │                   │           │       │         └ 'leaky'
    │             │              │               │                   │           │       └ False
    │             │              │               │                   │           └ 'cpu'
    │             │              │               │                   └ PosixPath('/Users/npk/.cache/manga-ocr/comictextdetector.pt')
    │             │              │               └ <function get_base_det_models at 0x13fded480>
    │             │              └ TextDetBase()
    │             └ TextDetBase()
    └ TextDetBase()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/comic_text_detector/basemodel.py", line 212, in get_base_det_models
    textdetector_dict = torch.load(model_path, map_location=device)
                        │     │    │                        └ 'cpu'
                        │     │    └ PosixPath('/Users/npk/.cache/manga-ocr/comictextdetector.pt')
                        │     └ <function load at 0x13cd36170>
                        └ <module 'torch' from '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/__init__.py'>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/serialization.py", line 1005, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
         │                    └ <_io.BufferedReader name='/Users/npk/.cache/manga-ocr/comictextdetector.pt'>
         └ <class 'torch.serialization._open_zipfile_reader'>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/serialization.py", line 457, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
                     │     │  │                 └ <_io.BufferedReader name='/Users/npk/.cache/manga-ocr/comictextdetector.pt'>
                     │     │  └ <class 'torch.PyTorchFileReader'>
                     │     └ <module 'torch._C' from '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/_C.cpython-310...
                     └ <module 'torch' from '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/__init__.py'>

RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
2024-07-11 20:48:57.385 | INFO     | mokuro.run:run:146 - Processed successfully: 0/1

My setup:
macOS Monterey 12.7.5,
Python 3.10.11,
pip 23.0.1,
and a fresh install of mokuro v0.2.1.

I hope a solution to this issue can be found soon. I am not very tech-savvy, but I will do my best to follow along. Thank you in advance.

kha-white (Owner) commented:
Seems like the model file is corrupted. Try downloading it manually from: https://github.com/zyddnys/manga-image-translator/releases/download/beta-0.2.1/comictextdetector.pt

and replace this file: /Users/npk/.cache/manga-ocr/comictextdetector.pt
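Before re-downloading, the corruption can be confirmed directly: modern `torch.save()` checkpoints are zip archives, and "failed finding central directory" is exactly what PyTorch reports when the archive is truncated or mangled. A minimal sketch using only the standard library (the cache path is the one from the traceback above; the `checkpoint_looks_valid` helper is a name introduced here for illustration):

```python
# Sanity-check a cached PyTorch checkpoint before re-downloading it.
# torch.save() files (PyTorch >= 1.6) are zip archives, so a truncated
# or corrupted download fails zipfile's central-directory check --
# the same condition behind "failed finding central directory".
import zipfile
from pathlib import Path

def checkpoint_looks_valid(path: Path) -> bool:
    """Return True if the file exists and is a well-formed zip archive."""
    return path.is_file() and zipfile.is_zipfile(path)

# Cache location taken from the traceback in this thread; adjust as needed.
model_path = Path.home() / ".cache" / "manga-ocr" / "comictextdetector.pt"
if not checkpoint_looks_valid(model_path):
    print(f"{model_path} is missing or corrupted -- re-download it")
```

If the check fails, deleting the file and replacing it with the manually downloaded copy (or simply re-running mokuro so it re-downloads) should clear this particular error.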

hiderkee (Author) commented:
Thank you very much @kha-white!!
I replaced the file you mentioned. That fixed the first error, but then another one appeared:

My input:
mokuro /Users/npk/Documents/Nana/v01

The following output:

2024-07-15 14:46:32.894 | WARNING  | mokuro.run:run:55 - Legacy HTML output is deprecated and will not be further developed. It's recommended to use .mokuro format and web reader instead. Legacy HTML will be disabled by default in the future. To explicitly enable it, run with option --legacy-html.
2024-07-15 14:46:32.895 | INFO     | mokuro.run:run:63 - Scanning paths...

Found 1 volumes:

/Users/npk/Documents/Nana/v01 (unprocessed)

Each of the paths above will be treated as one volume.


Continue? [yes/no]yes
2024-07-15 14:46:34.862 | INFO     | mokuro.run:run:133 - Processing 1/1: /Users/npk/Documents/Nana/v01
Processing pages...:   0%|                              | 0/231 [00:00<?, ?it/s]2024-07-15 14:46:34.962 | INFO     | mokuro.manga_page_ocr:__init__:41 - Initializing text detector, using device cpu
2024-07-15 14:46:35.380 | INFO     | manga_ocr.ocr:__init__:15 - Loading OCR model from kha-white/manga-ocr-base
preprocessor_config.json: 100%|████████████████| 228/228 [00:00<00:00, 1.18MB/s]
tokenizer_config.json: 100%|███████████████████| 486/486 [00:00<00:00, 2.56MB/s]
vocab.txt: 100%|███████████████████████████| 24.1k/24.1k [00:00<00:00, 21.9MB/s]
special_tokens_map.json: 100%|██████████████████| 112/112 [00:00<00:00, 593kB/s]
config.json: 100%|██████████████████████████| 77.5k/77.5k [00:00<00:00, 323kB/s]
pytorch_model.bin: 100%|█████████████████████| 444M/444M [00:44<00:00, 9.99MB/s]
2024-07-15 14:47:29.938 | INFO     | manga_ocr.ocr:__init__:28 - Using MPS
Processing pages...:   0%|                              | 0/231 [00:56<?, ?it/s]
2024-07-15 14:47:31.361 | ERROR    | mokuro.run:run:142 - Error while processing /Users/npk/Documents/Nana/v01
Traceback (most recent call last):

  File "/Library/Frameworks/Python.framework/Versions/3.10/bin/mokuro", line 8, in <module>
    sys.exit(main())
    │   │    └ <function main at 0x10b597010>
    │   └ <built-in function exit>
    └ <module 'sys' (built-in)>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mokuro/__main__.py", line 7, in main
    fire.Fire(run)
    │    │    └ <function run at 0x1444cce50>
    │    └ <function Fire at 0x144426710>
    └ <module 'fire' from '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/fire/__init__.py'>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/fire/core.py", line 143, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
                      │     │          │     │                 │        └ 'mokuro'
                      │     │          │     │                 └ {}
                      │     │          │     └ Namespace(verbose=False, interactive=False, separator='-', completion=None, help=False, trace=False)
                      │     │          └ ['/Users/npk/Documents/Nana/v01']
                      │     └ <function run at 0x1444cce50>
                      └ <function _Fire at 0x1444cc820>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/fire/core.py", line 477, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
    │                           └ <function _CallAndUpdateTrace at 0x1444cc940>
    └ <function run at 0x1444cce50>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/fire/core.py", line 693, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
                │   │          └ {}
                │   └ ['/Users/npk/Documents/Nana/v01']
                └ <function run at 0x1444cce50>
> File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mokuro/run.py", line 137, in run
    mg.process_volume(volume, ignore_errors=ignore_errors, no_cache=no_cache)
    │  │              │                     │                       └ False
    │  │              │                     └ False
    │  │              └ <mokuro.volume.Volume object at 0x1444f8130>
    │  └ <function MokuroGenerator.process_volume at 0x1444260e0>
    └ <mokuro.mokuro_generator.MokuroGenerator object at 0x1444f8430>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mokuro/mokuro_generator.py", line 65, in process_volume
    raise e
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mokuro/mokuro_generator.py", line 57, in process_volume
    self.init_models()
    │    └ <function MokuroGenerator.init_models at 0x144426050>
    └ <mokuro.mokuro_generator.MokuroGenerator object at 0x1444f8430>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mokuro/mokuro_generator.py", line 24, in init_models
    self.mpocr = MangaPageOcr(
    │    │       └ <class 'mokuro.manga_page_ocr.MangaPageOcr'>
    │    └ None
    └ <mokuro.mokuro_generator.MokuroGenerator object at 0x1444f8430>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/mokuro/manga_page_ocr.py", line 45, in __init__
    self.mocr = MangaOcr(pretrained_model_name_or_path, force_cpu)
    │           │        │                              └ False
    │           │        └ 'kha-white/manga-ocr-base'
    │           └ <class 'manga_ocr.ocr.MangaOcr'>
    └ <mokuro.manga_page_ocr.MangaPageOcr object at 0x14455c460>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/manga_ocr/ocr.py", line 36, in __init__
    self(example_path)
    │    └ PosixPath('/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/manga_ocr/assets/example.jpg')
    └ <manga_ocr.ocr.MangaOcr object at 0x14459e980>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/manga_ocr/ocr.py", line 53, in __call__
    x = self.model.generate(x[None].to(self.model.device), max_length=300)[0].cpu()
        │    │     │        │          │    │     └ <property object at 0x14439d2b0>
        │    │     │        │          │    └ VisionEncoderDecoderModel(
        │    │     │        │          │        (encoder): ViTModel(
        │    │     │        │          │          (embeddings): ViTEmbeddings(
        │    │     │        │          │            (patch_embeddings): ViTPatchEmbeddin...
        │    │     │        │          └ <manga_ocr.ocr.MangaOcr object at 0x14459e980>
        │    │     │        └ tensor([[[-0.6471, -0.9373, -0.8824,  ...,  0.1059,  0.1059,  0.0980],
        │    │     │                   [-0.8431, -0.8745, -0.1373,  ...,  0.1059,  0...
        │    │     └ <function GenerationMixin.generate at 0x144352710>
        │    └ VisionEncoderDecoderModel(
        │        (encoder): ViTModel(
        │          (embeddings): ViTEmbeddings(
        │            (patch_embeddings): ViTPatchEmbeddin...
        └ <manga_ocr.ocr.MangaOcr object at 0x14459e980>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           │     │       └ {'max_length': 300}
           │     └ (VisionEncoderDecoderModel(
           │         (encoder): ViTModel(
           │           (embeddings): ViTEmbeddings(
           │             (patch_embeddings): ViTPatchEmbeddi...
           └ <function GenerationMixin.generate at 0x144352680>
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/transformers/generation/utils.py", line 1664, in generate
    self._prepare_special_tokens(generation_config, kwargs_has_attention_mask, device=device)
    │    │                       │                  │                                 └ device(type='mps', index=0)
    │    │                       │                  └ False
    │    │                       └ GenerationConfig {
    │    │                           "decoder_start_token_id": 2,
    │    │                           "early_stopping": true,
    │    │                           "eos_token_id": 3,
    │    │                           "length_penalty": 2.0,
    │    │                           "...
    │    └ <function GenerationMixin._prepare_special_tokens at 0x1443525f0>
    └ VisionEncoderDecoderModel(
        (encoder): ViTModel(
          (embeddings): ViTEmbeddings(
            (patch_embeddings): ViTPatchEmbeddin...
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/transformers/generation/utils.py", line 1513, in _prepare_special_tokens
    if eos_token_id is not None and torch.isin(elements=eos_token_id, test_elements=pad_token_id).any():
       │                            │     │             │                           └ tensor(0, device='mps:0')
       │                            │     │             └ tensor([3], device='mps:0')
       │                            │     └ <built-in method isin of type object at 0x13f42b150>
       │                            └ <module 'torch' from '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/torch/__init__.py'>
       └ tensor([3], device='mps:0')

NotImplementedError: The operator 'aten::isin.Tensor_Tensor_out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
2024-07-15 14:47:33.901 | INFO     | mokuro.run:run:146 - Processed successfully: 0/1

I hope this isn't a bother, since I have no clue where or why these problems occur. If possible, could I try installing an older version of mokuro?

kha-white (Owner) commented:
An older version of mokuro is unlikely to help. The issue seems to have been introduced by a change in the transformers library, which required a patch in PyTorch; see pytorch/pytorch#124518.

You might check whether a nightly PyTorch build fixes this for you, or try setting PYTORCH_ENABLE_MPS_FALLBACK=1 in the console before running mokuro, though this may be a bit slower than running fully on MPS.
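The fallback variable has to be set in the same shell session before mokuro starts, because PyTorch reads it at import time. A sketch of both ways to apply it (the volume path is the one from this thread):

```shell
# Route unsupported MPS ops (like aten::isin here) to the CPU,
# as suggested by the PyTorch error message. Export it for the
# whole shell session...
export PYTORCH_ENABLE_MPS_FALLBACK=1
# ...then run mokuro as usual:
# mokuro /Users/npk/Documents/Nana/v01

# Or scope it to a single invocation without exporting:
# PYTORCH_ENABLE_MPS_FALLBACK=1 mokuro /Users/npk/Documents/Nana/v01
```

Only the unsupported operators fall back to the CPU; the rest of the model still runs on MPS, so the slowdown is usually modest.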


Tigy01 commented Jul 19, 2024

I am also currently experiencing this on Fedora with Python 3.12.4.
