
BLEU score throws error when the prediction contains a white space #2095

Closed
bikcrum opened this issue Mar 6, 2023 · 3 comments

bikcrum commented Mar 6, 2023

🐛 Bug

Describe the bug

I am using torchtext.data.metrics.bleu_score to compute the BLEU score given the predicted text and the ground-truth text. However, when the predicted text contains a whitespace token and has a length of at least 4, the call throws an out-of-bounds IndexError.

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
Cell In[563], line 1
----> 1 bleu_score(candidate_corpus=[['this', 'is','english', ' ']], 
      2            references_corpus=[['this','is','german', 'language']])

File ~/miniconda3/envs/ml/lib/python3.9/site-packages/torchtext/data/metrics.py:86, in bleu_score(candidate_corpus, references_corpus, max_n, weights)
     83         clipped_counts[len(ngram) - 1] += clipped_counter[ngram]
     85     for ngram in candidate_counter:  # TODO: no need to loop through the whole counter
---> 86         total_counts[len(ngram) - 1] += candidate_counter[ngram]
     88 if min(clipped_counts) == 0:
     89     return 0.0

IndexError: index 4 is out of bounds for dimension 0 with size 4
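
The out-of-bounds index appears to come from how the 0.10-era _compute_ngram_counter rebuilds n-gram tuples by space-joining tokens and then splitting them again, so a whitespace-only token inflates the apparent n-gram order past max_n. Here is a minimal sketch of that reading (it mirrors my understanding of the old helper, not the exact library source):

from collections import Counter
from torchtext.data.utils import ngrams_iterator

tokens = ['this', 'is', 'english', ' ']

# Pre-0.14 torchtext built the n-gram counter roughly like this: ngrams_iterator
# space-joins each n-gram into a string, which is then split back on ' '.
counter = Counter(tuple(x.split(' ')) for x in ngrams_iterator(tokens, 4))

# The whitespace token contributes extra empty strings on the split, so the
# longest "4-gram" tuple has 5 elements and len(ngram) - 1 == 4 indexes past
# the size-4 count tensors.
print(max(len(ngram) for ngram in counter))  # 5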

To Reproduce

Steps to reproduce the behavior:

from torchtext.data.metrics import bleu_score

# This throws an error
bleu_score(candidate_corpus=[['this', 'is','english', ' ']], 
           references_corpus=[['this','is','german', 'language']])

Expected behavior
This should report the bleu_score for the predicted text.
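
On torchtext 0.10, one stopgap that avoids the crash is dropping whitespace-only tokens from the candidate before scoring (a minimal sketch reusing the example above, not an official fix):

from torchtext.data.metrics import bleu_score

def drop_blank_tokens(corpus):
    # Remove tokens that are empty or whitespace-only from each sentence.
    return [[tok for tok in sent if tok.strip()] for sent in corpus]

candidate = [['this', 'is', 'english', ' ']]
references = [['this', 'is', 'german', 'language']]  # same shape as the snippet above

# Runs without IndexError; returns 0.0 here since no higher-order n-grams overlap.
print(bleu_score(drop_blank_tokens(candidate), references))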

Environment

PyTorch version: 1.9.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A

OS: CentOS Linux release 7.9.2009 (Core) (x86_64)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Clang version: 3.4.2 (tags/RELEASE_34/dot2-final)
CMake version: version 2.8.12.2
Libc version: glibc-2.17

Python version: 3.9.15 (main, Nov 24 2022, 14:31:59)  [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.83.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: 
GPU models and configuration: GPU 0: Tesla V100-SXM3-32GB
Nvidia driver version: 470.161.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                96
On-line CPU(s) list:   0-95
Thread(s) per core:    2
Core(s) per socket:    24
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz
Stepping:              4
CPU MHz:               3206.414
CPU max MHz:           3700.0000
CPU min MHz:           1200.0000
BogoMIPS:              5400.00
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
L3 cache:              33792K
NUMA node0 CPU(s):     0-23,48-71
NUMA node1 CPU(s):     24-47,72-95
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba rsb_ctxsw ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==1.9.0
[pip3] torchaudio==0.13.1
[pip3] torchtext==0.10.0
[pip3] torchvision==0.14.1
[conda] blas                      1.0                         mkl  
[conda] cudatoolkit               11.2.2              hbe64b41_10    conda-forge
[conda] ffmpeg                    4.3                  hf484d3e_0    pytorch
[conda] mkl                       2021.4.0           h06a4308_640  
[conda] mkl-service               2.4.0            py39h7f8727e_0  
[conda] mkl_fft                   1.3.1            py39hd3c417c_0  
[conda] mkl_random                1.2.2            py39h51133e4_0  
[conda] numpy                     1.24.1                   pypi_0    pypi
[conda] numpy-base                1.23.5           py39h31eccc5_0  
[conda] pytorch-cuda              11.7                 h67b0de4_1    pytorch
[conda] pytorch-mutex             1.0                        cuda    pytorch
[conda] torch                     1.9.0                    pypi_0    pypi
[conda] torchaudio                0.13.1               py39_cu117    pytorch
[conda] torchtext                 0.10.0                   pypi_0    pypi
[conda] torchvision               0.14.1               py39_cu117    pytorch
torchtext version is 0.10.0

@joecummings (Contributor)

@bikcrum Can you update to the latest version of torchtext and let me know if you still have this error?

bikcrum (Author) commented Mar 6, 2023

> @bikcrum Can you update to the latest version of torchtext and let me know if you still have this error?

Turns out that there is no error as of torchtext version 0.14. The issue seems to have been fixed by PR #1913. Thank you!
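
For anyone landing here on an older install, the same call from the report should complete after upgrading; a quick check might look like this (assuming torchtext >= 0.14, per the comment above):

# Assumes torchtext >= 0.14, which includes the fix referenced above.
from torchtext.data.metrics import bleu_score

score = bleu_score(
    candidate_corpus=[['this', 'is', 'english', ' ']],
    references_corpus=[['this', 'is', 'german', 'language']],
)
print(score)  # completes without IndexError; 0.0 expected for this toy example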

@joecummings (Contributor)

Closing as resolved.
