Conversation

ngxson (Collaborator) commented Sep 20, 2025

Update ownership for these parts:

ngxson (Collaborator, Author) commented Sep 20, 2025

@CISC I only changed the top of the file so it won't conflict with your PR #16124. We should move /src/llama-chat.* to the same section as the other /src/* entries.

CISC (Collaborator) commented Sep 20, 2025

> we should move /src/llama-chat.* to the same section with other /src/*

It would be easier if you moved it to the bottom so that the changes conflict and I can resolve them in the GitHub web UI.

ericcurtin (Collaborator) commented Sep 21, 2025

I would like to propose adding my name as a co-codeowner to:

/tools/server/*
/tools/run/* (a co-codeowner here would be good, maybe @npopov-vst if he's willing)
/tools/pull/* (if we merge #16132, a co-codeowner here would be good, maybe @ngxson if he's willing)
/common/* (but I would also hope others add themselves here)

I'm not pushing a PR for this proposal as I don't want to cause conflicts, etc.
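For concreteness, the proposal above would translate into CODEOWNERS entries roughly like the following. This is a hypothetical sketch using standard GitHub CODEOWNERS syntax; any co-owners already listed for these paths in the real file are omitted here, and the /tools/pull/* entry depends on #16132 being merged:

```
# Hypothetical sketch of the proposed entries; existing co-owners omitted.
/tools/server/* @ericcurtin
/tools/run/*    @ericcurtin @npopov-vst
/common/*       @ericcurtin
```

In CODEOWNERS, the last matching pattern wins, so the placement of these lines relative to broader patterns like /tools/* matters, which is why the discussion above cares about which section of the file each entry lands in.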

npopov-vst (Contributor) commented

Thanks @ericcurtin. I am not sure if I am ready, since I am just learning this project, but I am willing to try.

ericcurtin (Collaborator) commented

> Thanks @ericcurtin. I am not sure if I am ready, since I am just learning this project, but I am willing to try.

You are ready IMO; you've been very helpful, and llama-run is one of the smaller tools.

@ggerganov ggerganov merged commit 05a2458 into ggml-org:master Sep 22, 2025
2 of 3 checks passed
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Sep 23, 2025
* origin/master: (39 commits)
ci : disable AMD workflows + update NVIDIA workflows (ggml-org#16200)
ci : enable Vulkan workflow on Mac (ggml-org#16194)
ggml-cpu: Respect cpumask settings (ggml-org#16164)
ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (ggml-org#15928)
zdnn: refactor codebase + add docs (ggml-org#16178)
codeowners : add @danbev to model-conversion example [no ci] (ggml-org#16190)
devops: add s390x containers (ggml-org#15915)
ggml-cpu : fix typo in gemm comments [no ci] (ggml-org#16189)
feat: Add conversion support in GraniteHybrid for non-hybrid (all attn) (ggml-org#16177)
clang-tidy : disable warning about performance enum size (ggml-org#16127)
ggml : implement set_rows with i32 index (ggml-org#16159)
codeowners : update + cleanup (ggml-org#16174)
common : enable `--offline` mode without curl support (ggml-org#16137)
webui : fix handling incomplete chunks (ggml-org#16107)
embedding : fix typos in README (ggml-org#16171)
common : remove unused local variables (ggml-org#16140)
ggml : extend ggml_can_fuse to work with non-sequential nodes (ggml-org#16123)
ggml : add ggml_op_is_empty (ggml-org#16122)
codeowners : update ownership for @ngxson and @allozuar (ggml-org#16128)
Vulkan: add conv_transpose_2d operation (ggml-org#16022)
...
struct pushed a commit to struct/llama.cpp that referenced this pull request Sep 26, 2025