
Invalidate trace cache @ step 0 and module 740: cache has only 0 modules #119

Open
Some-random opened this issue Jan 29, 2025 · 6 comments


@Some-random

Some-random commented Jan 29, 2025

I'm running the GRPO stage with this command (note that I've changed the model from the 7B Qwen to a 1.5B model):
accelerate launch --config_file configs/zero3.yaml src/open_r1/grpo.py --output_dir Qwen2.5-Math-1.5B-Instruct-GRPO --model_name_or_path Qwen/Qwen2.5-Math-1.5B-Instruct --dataset_name AI-MO/NuminaMath-TIR --max_prompt_length 256 --per_device_train_batch_size 1 --gradient_accumulation_steps 16 --logging_steps 10 --bf16

During training, this error message keeps popping up in the log, and my evaluation loss stays at 0 (which is obviously a problem):
Invalidate trace cache @ step 0 and module 740: cache has only 0 modules

I know this is an unresolved issue on the DeepSpeed side; I'm just trying to understand why it's happening in my particular setup and whether others have seen the same error message.

@Some-random
Author

It's weird: the error message disappeared after I switched to DeepSpeed stage 2, but the eval loss is still 0...

{'loss': 0.0, 'grad_norm': 0.0027486092876642942, 'learning_rate': 9.941107184923439e-07, 'completion_length': 251.3828125, 'rewards/accuracy_reward': 0.08955078125, 'rewards/format_reward': 0.0, 'reward': 0.08955078125, 'reward_std': 0.08234458500519395, 'kl': 1.7824722453951837e-05, 'epoch': 0.02}
{'loss': 0.0, 'grad_norm': 0.0029408170375972986, 'learning_rate': 9.882214369846878e-07, 'completion_length': 251.63369140625, 'rewards/accuracy_reward': 0.087890625, 'rewards/format_reward': 0.0, 'reward': 0.087890625, 'reward_std': 0.08372153064701707, 'kl': 2.3604952730238438e-05, 'epoch': 0.04}
{'loss': 0.0, 'grad_norm': 0.0026003336533904076, 'learning_rate': 9.823321554770318e-07, 'completion_length': 251.235546875, 'rewards/accuracy_reward': 0.0853515625, 'rewards/format_reward': 0.0, 'reward': 0.0853515625, 'reward_std': 0.0816438203677535, 'kl': 4.355926066637039e-05, 'epoch': 0.05}
  2%|▏         | 35/1698 [1:16:59<59:05:02, 127.90s/it]

@Ethereal-sakura

I'm having the exact same issue. Can anyone help with that?

@Jarvis-K

Same issue.

@Some-random
Author

It's weird: the error message disappeared after I switched to DeepSpeed stage 2, but the eval loss is still 0...

{'loss': 0.0, 'grad_norm': 0.0027486092876642942, 'learning_rate': 9.941107184923439e-07, 'completion_length': 251.3828125, 'rewards/accuracy_reward': 0.08955078125, 'rewards/format_reward': 0.0, 'reward': 0.08955078125, 'reward_std': 0.08234458500519395, 'kl': 1.7824722453951837e-05, 'epoch': 0.02}
{'loss': 0.0, 'grad_norm': 0.0029408170375972986, 'learning_rate': 9.882214369846878e-07, 'completion_length': 251.63369140625, 'rewards/accuracy_reward': 0.087890625, 'rewards/format_reward': 0.0, 'reward': 0.087890625, 'reward_std': 0.08372153064701707, 'kl': 2.3604952730238438e-05, 'epoch': 0.04}
{'loss': 0.0, 'grad_norm': 0.0026003336533904076, 'learning_rate': 9.823321554770318e-07, 'completion_length': 251.235546875, 'rewards/accuracy_reward': 0.0853515625, 'rewards/format_reward': 0.0, 'reward': 0.0853515625, 'reward_std': 0.0816438203677535, 'kl': 4.355926066637039e-05, 'epoch': 0.05}
  2%|▏         | 35/1698 [1:16:59<59:05:02, 127.90s/it]

Interestingly, the loss starts at 0 but increases as training continues:
{'loss': 0.0003, 'grad_norm': 0.013544703833758831, 'learning_rate': 9.17550058892815e-07, 'completion_length': 236.505078125, 'rewards/accuracy_reward': 0.19140625, 'rewards/format_reward': 0.0, 'reward': 0.19140625, 'reward_std': 0.166346223349683, 'kl': 0.008650064468383789, 'epoch': 0.25}
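
This would make sense if the loss is dominated by terms that start at zero: the group-normalized advantages average to zero, and the KL penalty is tiny while the policy still matches the reference. A minimal sketch of that arithmetic (assuming the standard GRPO advantage normalization, not TRL's actual code):

```python
# Minimal sketch: with group-normalized advantages, the naive policy term
# averages to ~0 at step 0 (ratio == 1), so loss ~= beta * KL ~= 0.
# Assumes the standard GRPO advantage A_i = (r_i - mean(r)) / (std(r) + eps).
import torch

rewards = torch.tensor([0.0, 1.0, 0.0, 0.0])        # rewards for one group of samples
advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-4)
ratio = torch.ones_like(advantages)                  # pi/pi_ref == 1 at the first step
policy_term = -(ratio * advantages).mean()           # zero-mean by construction
print(policy_term)                                   # ~0, matching the logged loss
```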

But I hit another error during evaluation, at epoch 0.35:

File "/fsx/users/dongweij/open-r1/src/open_r1/grpo.py", line 141, in <module>
    main(script_args, training_args, model_args)
  File "/fsx/users/dongweij/open-r1/src/open_r1/grpo.py", line 130, in main
    trainer.train()
  File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/transformers/trainer.py", line 2171, in train
    return inner_training_loop(
           ^^^^^^^^^^^^^^^^^^^^
  File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/transformers/trainer.py", line 2531, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/transformers/trainer.py", line 3675, in training_step
    loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/trl/trainer/grpo_trainer.py", line 494, in compute_loss
    output_reward_func = reward_func(prompts=prompts, completions=completions, **reward_kwargs)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/fsx/users/dongweij/open-r1/src/open_r1/grpo.py", line 69, in accuracy_reward
    reward = float(verify(answer_parsed, gold_parsed))
                   ^^^^^^^^^^^^^^^^^^^^^^^
    return Complement.reduce(a, b)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/sympy/sets/sets.py", line 1731, in reduce
    if B == S.UniversalSet or A.is_subset(B):
                              ^^^^^^^^^^^^^^
  File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/sympy/sets/sets.py", line 413, in is_subset
    ret = self._eval_is_subset(other)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/sympy/sets/sets.py", line 2056, in _eval_is_subset
    return fuzzy_and(other._contains(e) for e in self.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/sympy/core/logic.py", line 142, in fuzzy_and
    for ai in args:
  File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/sympy/sets/sets.py", line 2056, in <genexpr>
    return fuzzy_and(other._contains(e) for e in self.args)
                     ^^^^^^^^^^^^^^^^^^
  File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/sympy/sets/sets.py", line 2053, in _contains
    return Or(*[Eq(e, other, evaluate=True) for e in self.args])
           ^^^^^^^^^^^^
[rank0]: Traceback (most recent call last):
[rank0]:   File "/fsx/users/dongweij/open-r1/src/open_r1/grpo.py", line 141, in <module>
[rank0]:     main(script_args, training_args, model_args)
[rank0]:   File "/fsx/users/dongweij/open-r1/src/open_r1/grpo.py", line 130, in main
[rank0]:     trainer.train()
[rank0]:   File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/transformers/trainer.py", line 2171, in train
[rank0]:     return inner_training_loop(
[rank0]:            ^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/transformers/trainer.py", line 2531, in _inner_training_loop
[rank0]:     tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank0]:                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/transformers/trainer.py", line 3675, in training_step
[rank0]:     loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/trl/trainer/grpo_trainer.py", line 494, in compute_loss
[rank0]:     output_reward_func = reward_func(prompts=prompts, completions=completions, **reward_kwargs)
[rank0]:                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/fsx/users/dongweij/open-r1/src/open_r1/grpo.py", line 69, in accuracy_reward
[rank0]:     reward = float(verify(answer_parsed, gold_parsed))
[rank0]:                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:
[rank0]:              ^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/sympy/sets/sets.py", line 262, in _symmetric_difference
[rank0]:     return Union(Complement(self, other), Complement(other, self))
[rank0]:                  ^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/sympy/sets/sets.py", line 1721, in __new__
[rank0]:     return Complement.reduce(a, b)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/sympy/sets/sets.py", line 1731, in reduce
[rank0]:     if B == S.UniversalSet or A.is_subset(B):
[rank0]:                               ^^^^^^^^^^^^^^
[rank0]:   File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/sympy/sets/sets.py", line 413, in is_subset
[rank0]:     ret = self._eval_is_subset(other)
[rank0]:           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/sympy/sets/sets.py", line 2056, in _eval_is_subset
[rank0]:     return fuzzy_and(other._contains(e) for e in self.args)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/sympy/core/logic.py", line 142, in fuzzy_and
[rank0]:     for ai in args:
[rank0]:   File "/fsx/users/dongweij/minic
e 335, in canonical
[rank0]:     r = self.func(*args)
[rank0]:         ^^^^^^^^^^^^^^^^
[rank0]:   File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/sympy/core/relational.py", line 852, in __new__
[rank0]:     return cls._eval_relation(lhs, rhs, **options)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/sympy/core/relational.py", line 859, in _eval_relation
[rank0]:     val = cls._eval_fuzzy_relation(lhs, rhs)
[rank0]:           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/sympy/core/relational.py", line 1186, in _eval_fuzzy_relation
[rank0]:     return is_lt(lhs, rhs)
[rank0]:            ^^^^^^^^^^^^^^^
[rank0]:   File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/sympy/core/relational.py", line 1265, in is_lt
[rank0]:     return fuzzy_not(is_ge(lhs, rhs, assumptions))
[rank0]:                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/fsx/users/dongweij/miniconda3/envs/openr1/lib/python3.11/site-packages/sympy/core/relational.py", line 1380, in is_ge
[rank0]:     raise TypeError("Can only compare inequalities with Expr")
[rank0]: TypeError: Can only compare inequalities with Expr

Can anyone help with that?
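
In case it helps others hitting this: the crash is math-verify/SymPy choking on a malformed model answer inside the reward function, so one stopgap is to treat any verifier exception as an incorrect answer instead of letting it kill the run. A rough sketch, based on the accuracy_reward shape visible in the traceback (names and signature are illustrative, not the exact open-r1 code):

```python
# Hypothetical hardening of the accuracy reward: any exception raised inside
# math-verify / SymPy (e.g. "Can only compare inequalities with Expr") scores
# the sample as 0.0 instead of crashing training.
from math_verify import parse, verify  # math-verify's public parse/verify helpers

def accuracy_reward(prompts, completions, solution, **kwargs):
    rewards = []
    for completion, sol in zip(completions, solution):
        try:
            gold_parsed = parse(sol)
            answer_parsed = parse(completion)
            rewards.append(float(verify(answer_parsed, gold_parsed)))
        except Exception:
            # Malformed generations can crash SymPy's set/relational logic;
            # count them as failed verifications and keep training.
            rewards.append(0.0)
    return rewards
```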

@zhyaoch

zhyaoch commented Jan 30, 2025

[Quotes the comment above in full, including the traceback.]
I'm hitting the same error as you at around epoch 0.3.

@Some-random
Author

There was a new commit bumping math-verify to 0.3.3. I've tried upgrading math-verify, and it seems to solve the "compare inequalities" issue.
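
If anyone wants to double-check that the upgrade actually took effect in their environment, here's a quick sanity check (assuming the PyPI distribution name is math-verify):

```python
# Confirm the installed math-verify is at least 0.3.3 before re-launching.
from importlib.metadata import version
from packaging.version import Version

installed = version("math-verify")
assert Version(installed) >= Version("0.3.3"), installed
print("math-verify", installed)
```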
