
core dumped #7

Open
BingyuanZhang opened this issue Jan 23, 2025 · 0 comments
When running the evaluation script, a floating-point exception (Floating point exception (core dumped)) occurred. Additionally, warnings related to CUDA graphs were displayed during execution. The specific log is as follows:

INFO 01-23 22:03:50 model_runner.py:980] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
INFO 01-23 22:03:50 model_runner.py:984] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing gpu_memory_utilization or enforcing eager mode. You can also reduce the max_num_seqs as needed to decrease memory usage.
eval.sh: line 8: 158893 Floating point exception(core dumped) CUDA_VISIBLE_DEVICES=0 python pipeline/gen.py --gen_save_path "data/res/250123llama318b.jsonl" --model_name_or_path "./Llama3-8B_0122/global_step_3411" --datasets "math/test" "gsm8k/test" --max_new_toks 2048 --temperature 0 --prompt_template "cot" --n_shots -1 --inf_seed -1 --max_n_trials 1

Could you help me?
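
For reference, the eager-mode fallback mentioned in the warning can be tried to rule out CUDA graph capture as the cause. This is a minimal sketch assuming the script builds its engine through vLLM's LLM class; how pipeline/gen.py actually exposes this option is not shown in the log, so the snippet below is illustrative only.

from vllm import LLM, SamplingParams

# Hypothetical standalone repro: load the same checkpoint with CUDA graph
# capture disabled (enforce_eager=True) to see whether the crash persists.
llm = LLM(
    model="./Llama3-8B_0122/global_step_3411",  # checkpoint path from the repro command
    enforce_eager=True,            # skip CUDA graph capture, as suggested in the warning
    gpu_memory_utilization=0.9,    # lower this if memory pressure is also a concern
)

params = SamplingParams(temperature=0.0, max_tokens=2048)
outputs = llm.generate(["1 + 1 ="], params)
print(outputs[0].outputs[0].text)

If the crash disappears in eager mode, the issue is likely related to graph capture or memory rather than the checkpoint itself.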
