Actions: nihui/ncnn

Showing runs from all workflows
1,841 workflow runs

fix
code-format #4559: Commit c49bb4f pushed by nihui
November 7, 2024 11:12 1m 47s gemm-quantize-x86
w
code-format #4558: Commit 2c4bf75 pushed by nihui
November 7, 2024 09:30 1m 54s gemm-quantize-x86
Update pnnx.yml
code-format #4557: Commit f7647af pushed by nihui
November 6, 2024 07:18 1m 44s pnnx-expr-implicit-int-conversion
w
code-format #4554: Commit 502d58b pushed by nihui
November 4, 2024 08:45 1m 43s gemm-quantize-x86
Merge branch 'Tencent:master' into gemm-quantize-x86
code-format #4553: Commit 952bedd pushed by nihui
October 29, 2024 07:09 1m 41s gemm-quantize-x86
disable x86 auto recip optimization for potential precision loss (#5762)
code-format #4552: Commit c32442a pushed by nihui
October 29, 2024 07:07 1m 43s master
disable x86 auto recip optimization for potential precision loss
code-format #4551: Commit fee708c pushed by nihui
October 29, 2024 06:22 1m 51s x86-no-recip
pnnx do not fold tensor with dynamic shape, use fp32 module by defaul…
code-format #4550: Commit 6077adc pushed by nihui
October 29, 2024 03:18 1m 36s x86-no-recip
x86 sse2/xop/avx/avx2 optimization for gemm int8
code-format #4549: Commit 8076f4e pushed by nihui
October 28, 2024 06:46 1m 48s gemm-quantize-x86
pnnx do not fold tensor with dynamic shape, use fp32 module by defaul…
code-format #4548: Commit 6077adc pushed by nihui
October 28, 2024 06:43 1m 49s gemm-quantize-x86
pnnx do not fold tensor with dynamic shape, use fp32 module by defaul…
code-format #4547: Commit 6077adc pushed by nihui
October 28, 2024 06:43 1m 43s master
Merge branch 'master' into lunarlake-0
code-format #4546: Commit ded280e pushed by nihui
October 25, 2024 03:02 1m 38s lunarlake-0
fix gemm arm int8 scales descales offset (#5750)
code-format #4544: Commit e7602a2 pushed by nihui
October 22, 2024 08:00 1m 41s pnnx-dynamic-unfoldable
fix gemm arm int8 scales descales offset (#5750)
code-format #4543: Commit e7602a2 pushed by nihui
October 22, 2024 07:59 1m 46s master
fix++
code-format #4542: Commit 338a5f9 pushed by nihui
October 21, 2024 09:03 1m 44s gemm-quantize-r2
fix gemm arm int8 scales descales offset
code-format #4541: Commit 302be4b pushed by nihui
October 21, 2024 08:32 1m 54s gemm-quantize-r2
arm neon optimization for layernorm fp32/bf16s/fp16s (#5746)
code-format #4540: Commit 8fe6281 pushed by nihui
October 21, 2024 08:30 1m 28s gemm-quantize-r2
Update pnnx.yml
code-format #4539: Commit 42076e7 pushed by nihui
October 21, 2024 07:45 1m 47s pnnx-torch-2.5
Update pnnx.yml
code-format #4538: Commit 22c73a8 pushed by nihui
October 21, 2024 07:06 1m 40s pnnx-torch-2.5
Update pnnx.yml
code-format #4537: Commit 2e42bca pushed by nihui
October 21, 2024 06:40 1m 46s pnnx-torch-2.5
avx vnni int8, avx vnni int16, avx ne convert infrastructure
code-format #4536: Commit 40722c2 pushed by nihui
October 18, 2024 14:18 1m 40s lunarlake-0
arm neon optimization for layernorm fp32/bf16s/fp16s (#5746)
code-format #4535: Commit 8fe6281 pushed by nihui
October 18, 2024 13:38 1m 48s lunarlake-0