
issue templates: Separate bug and enhancement template + no default title #3748

Merged · 1 commit merged into master on Oct 23, 2023

Conversation

monatis (Collaborator) commented Oct 23, 2023

Having a default title lets users simply leave it in place, as happened in #3744 and many others. We can also have separate templates for reporting bugs and requesting enhancements. I hope this helps with better issue management.
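For reference, a minimal sketch of how such a split is typically done with GitHub's Markdown issue templates (file names and field values here are illustrative, not necessarily the exact contents of this PR): one file per template under `.github/ISSUE_TEMPLATE/`, each with YAML front matter, leaving `title` empty so reporters have to write their own.

```yaml
# .github/ISSUE_TEMPLATE/bug.md — YAML front matter (illustrative)
---
name: Bug report
about: Something in llama.cpp is not working as expected
title: ''            # empty, so the reporter must write a descriptive title
labels: bug          # applied automatically to every issue opened from this template
assignees: ''
---

# .github/ISSUE_TEMPLATE/enhancement.md (illustrative)
---
name: Enhancement
about: Request a new feature or an improvement
title: ''
labels: enhancement
assignees: ''
---
```

With more than one template file present, GitHub shows a chooser page when a user clicks "New issue", which is what routes reports into the right category.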

ggerganov merged commit 9d02956 into master on Oct 23, 2023 · 7 checks passed
staviq (Contributor) commented Oct 27, 2023

I'm not sure if that was entirely intentional, but every new bug report now gets the bug label automatically.

Previously, that label was a good way to mark actual bugs, as opposed to people simply doing something wrong.
If somebody managed to reproduce and confirm a bug but wasn't familiar with that part of the code and couldn't take care of it themselves, the bug label was a good way to save time for other people going through issue reports.

Now, even when people ask questions or are having very obvious user-side problems, the issue gets marked as a bug.

So imho, the bug label has lost its meaning.

I think it would be a much better idea to have something like a bug-unconfirmed label with a less aggressive color and apply that automatically, so the bug label can be reserved for actual, confirmed bugs.
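If the auto-applied label comes from the template's front matter, as in the sketch above, the change suggested here would be a one-line edit; the `bug-unconfirmed` label name and its color are staviq's proposal and would still need to be created in the repository's label settings:

```yaml
# .github/ISSUE_TEMPLATE/bug.md — front matter excerpt (illustrative)
---
name: Bug report
title: ''
labels: bug-unconfirmed   # auto-applied to new reports; proposed in this thread
---
```

Maintainers would then apply the plain bug label by hand once a report is reproduced, restoring its value as a triage signal.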

mattgauf added a commit to mattgauf/llama.cpp that referenced this pull request Oct 27, 2023
* master: (350 commits)
  speculative : ensure draft and target model vocab matches (ggerganov#3812)
  llama : correctly report GGUFv3 format (ggerganov#3818)
  simple : fix batch handling (ggerganov#3803)
  cuda : improve text-generation and batched decoding performance (ggerganov#3776)
  server : do not release slot on image input (ggerganov#3798)
  batched-bench : print params at start
  log : disable pid in log filenames
  server : add parameter -tb N, --threads-batch N (ggerganov#3584) (ggerganov#3768)
  server : do not block system prompt update (ggerganov#3767)
  sync : ggml (conv ops + cuda MSVC fixes) (ggerganov#3765)
  cmake : add missed dependencies (ggerganov#3763)
  cuda : add batched cuBLAS GEMM for faster attention (ggerganov#3749)
  Add more tokenizer tests (ggerganov#3742)
  metal : handle ggml_scale for n%4 != 0 (close ggerganov#3754)
  Revert "make : add optional CUDA_NATIVE_ARCH (ggerganov#2482)"
  issues : separate bug and enhancement template + no default title (ggerganov#3748)
  Update special token handling in conversion scripts for gpt2 derived tokenizers (ggerganov#3746)
  llama : remove token functions with `context` args in favor of `model` (ggerganov#3720)
  Fix baichuan convert script not detecing model (ggerganov#3739)
  make : add optional CUDA_NATIVE_ARCH (ggerganov#2482)
  ...
Nexesenex pushed a commit to Nexesenex/croco.cpp that referenced this pull request Oct 28, 2023
olexiyb pushed a commit to Sanctum-AI/llama.cpp that referenced this pull request Nov 23, 2023