
Wandb logging cannot be disabled #976

Open
rmakarovv opened this issue Dec 13, 2024 · 0 comments
Labels
bug Something isn't working

Describe the bug
It is not possible to disable logging through wandb and tensorflow. While tensorflow stays silent, wandb prompts for interactive input, which makes it impossible to run the script unattended. The only workaround I found is setting WANDB_DISABLED=true when running the script.
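
For reference, the same workaround can be applied from inside Python (a minimal sketch; it assumes the environment variable is set before wandb is imported or initialized by any library):

import os

# Must run before llm-compressor / transformers / wandb are imported,
# otherwise wandb may already have been initialized.
os.environ["WANDB_DISABLED"] = "true"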

Expected behavior
Since llm-compressor uses HfArgumentParser under the hood, one would expect that passing report_to="none" to the oneshot(...) call would disable all logging. However, the problem persists: when I run the script (see the "To Reproduce" section), I get the following prompt in the console:

wandb: (1) Create a W&B account
wandb: (2) Use an existing W&B account
wandb: (3) Don't visualize my results
wandb: Enter your choice: 
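
For comparison, plain transformers handles this flag as expected (a small sketch against transformers 4.46, not llm-compressor code; the printed value is what I would expect, not verified output):

from transformers import TrainingArguments

# report_to="none" is normalized to an empty list of integrations,
# so no wandb/tensorboard callbacks should be attached.
args = TrainingArguments(output_dir="tmp", report_to="none")
print(args.report_to)  # expected: []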

Environment

  • Ubuntu 22.04.3
  • Python 3.10
  • List of all packages:
Package                           Version
--------------------------------- -------------
accelerate                        1.2.0
annotated-types                   0.7.0
clearml                           1.16.5
cloudpickle                       3.1.0
compressed-tensors                0.8.0
datasets                          3.1.0
diskcache                         5.6.3
fastapi                           0.115.6
gguf                              0.10.0
h11                               0.14.0
httpcore                          1.0.7
httptools                         0.6.4
httpx                             0.28.1
huggingface-hub                   0.26.5
importlib_metadata                8.5.0
interegular                       0.3.3
jiter                             0.8.2
lark                              1.2.2
llmcompressor                     0.3.0
lm-format-enforcer                0.10.9
mistral_common                    1.5.1
msgspec                           0.18.6
numpy                             1.26.4
nvidia-cublas-cu12                12.4.5.8
nvidia-cuda-cupti-cu12            12.4.127
nvidia-cuda-nvrtc-cu12            12.4.127
nvidia-cuda-runtime-cu12          12.4.127
nvidia-cudnn-cu12                 9.1.0.70
nvidia-cufft-cu12                 11.2.1.3
nvidia-curand-cu12                10.3.5.147
nvidia-cusolver-cu12              11.6.1.9
nvidia-cusparse-cu12              12.3.1.170
nvidia-ml-py                      12.560.30
nvidia-nccl-cu12                  2.21.5
nvidia-nvjitlink-cu12             12.4.127
nvidia-nvtx-cu12                  12.4.127
openai                            1.57.3
opencv-python-headless            4.10.0.84
outlines                          0.0.46
partial-json-parser               0.2.1.1.post4
pillow                            10.4.0
prometheus-fastapi-instrumentator 7.0.0
py-cpuinfo                        9.0.0
pyairports                        2.1.1
pycountry                         24.6.1
pydantic                          2.10.3
pydantic_core                     2.27.1
pynvml                            11.5.3
python-dotenv                     1.0.1
ray                               2.40.0
requests                          2.32.3
safetensors                       0.4.5
sentencepiece                     0.2.0
starlette                         0.41.3
sympy                             1.13.1
tiktoken                          0.7.0
tokenizers                        0.20.3
torch                             2.5.1
torchvision                       0.20.1
tqdm                              4.67.0
tqdm-multiprocess                 0.0.11
transformers                      4.46.3
triton                            3.1.0
typing_extensions                 4.12.2
uvicorn                           0.32.1
uvloop                            0.21.0
vllm                              0.6.4.post1
watchfiles                        1.0.3
websockets                        14.1
xformers                          0.0.28.post3
zipp                              3.21.0

To Reproduce
To reproduce the issue, use the following script, which is a modified version of the one in the README.md:

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.transformers import oneshot
from transformers import AutoModelForCausalLM

recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"]),
]

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-INT8",
    max_seq_length=2048,
    num_calibration_samples=512,
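    # report_to="none" is expected to disable all reporting integrations (wandb, tensorboard, ...)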
    report_to="none"
)
rmakarovv added the bug label on Dec 13, 2024