README.md: 10 additions & 13 deletions
@@ -19,11 +19,14 @@ Verify | It takes a ML-DSA public key, N (>=0) -bytes message, an optional conte

Here I'm maintaining `ml-dsa` as a C++20 header-only `constexpr` library, implementing NIST FIPS 204 ML-DSA and supporting the ML-DSA-{44, 65, 87} parameter sets defined in Table 1 of the standard. For more details on using this library, see [below](#usage) (a brief illustrative sketch also follows this hunk). It shows the following performance characteristics on desktop- and server-grade CPUs.

-ML-DSA-65 Algorithm | Time taken on "12th Gen Intel(R) Core(TM) i7-1260P" | Time taken on "Raspberry Pi 4B" | Time taken on "AWS EC2 Instance c8g.large"
---- | --: | --: | --:
-keygen | 94.4 us | 442.9 us | 143 us
-sign | 115.7 us | 2364.7 us | 427 us
-verify | 98.5 us | 492.1 us | 151 us
+ML-DSA-65 Algorithm | Time taken on "12th Gen Intel(R) Core(TM) i7-1260P" | Time taken on "AWS EC2 Instance c8g.large"
+--- | --: | --:
+keygen | 92.9 us | 126.2 us
+sign | 160.5 us | 231.7 us
+verify | 94.8 us | 134.4 us
+
+> [!NOTE]
+> All numbers in the table above represent the median time required to execute a specific algorithm, except for signing. In the case of signing, the number represents the minimum time required to sign a 32B message. To understand why this is done for signing, please refer to [this](#benchmarking) section.

> [!NOTE]
> Find the ML-DSA standard @ https://doi.org/10.6028/NIST.FIPS.204, which you should refer to for the intricate details of this implementation.
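
To make the keygen, sign, and verify rows in the table above concrete, here is a minimal sketch of an ML-DSA-65 round trip over a 32B message. The header path, namespace, constant names, and function signatures below are assumptions for illustration only and are not taken from this diff; the repository's [usage](#usage) section documents the actual API.

```cpp
// Hypothetical ML-DSA-65 round trip: keygen -> sign -> verify.
// Header path, namespace, constant and function names are assumed for
// illustration; consult the repository's usage docs for the real API.
#include "ml_dsa/ml_dsa_65.hpp" // assumed header location

#include <array>
#include <cstdint>
#include <random>
#include <span>

int main()
{
  // Assumed compile-time byte lengths exposed by the library.
  std::array<uint8_t, ml_dsa_65::KeygenSeedByteLen> seed{};
  std::array<uint8_t, ml_dsa_65::PubKeyByteLen> pubkey{};
  std::array<uint8_t, ml_dsa_65::SecKeyByteLen> seckey{};
  std::array<uint8_t, ml_dsa_65::SigByteLen> sig{};
  std::array<uint8_t, 32> msg{};  // 32B message, as in the benchmarks above
  std::span<const uint8_t> ctx{}; // empty optional context

  // Demo-only randomness; use a CSPRNG for anything real.
  std::mt19937_64 rng{std::random_device{}()};
  for (auto& b : seed) { b = static_cast<uint8_t>(rng()); }
  for (auto& b : msg) { b = static_cast<uint8_t>(rng()); }

  // Assumed call shapes: keygen derives the key pair from a seed, sign
  // produces a signature over (message, context), verify returns a bool.
  ml_dsa_65::keygen(seed, pubkey, seckey);
  ml_dsa_65::sign(seckey, msg, ctx, sig);
  const bool ok = ml_dsa_65::verify(pubkey, msg, ctx, sig);

  return ok ? 0 : 1;
}
```

If the real API differs (for example, FIPS 204 hedged signing also consumes fresh randomness), only the call sites change; the keygen, then sign, then verify shape of the flow stays the same.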
@@ -113,16 +116,10 @@ make perf -j # If you have built google-benchmark library with libPFM supp
> Ensure you've put all CPU cores in **performance** mode before running benchmarks; follow the guide @ https://github.com/google/benchmark/blob/main/docs/reducing_variance.md.

### On 12th Gen Intel(R) Core(TM) i7-1260P
-
-Benchmark result in JSON format @ [bench_result_on_Linux_6.11.0-9-generic_x86_64_with_g++_14.json](./bench_result_on_Linux_6.11.0-9-generic_x86_64_with_g++_14.json).
-
-### On Raspberry Pi 4B
-
-Benchmark result in JSON format @ [bench_result_on_Linux_6.6.51+rpt-rpi-v8_aarch64_with_g++_12.json](./bench_result_on_Linux_6.6.51+rpt-rpi-v8_aarch64_with_g++_12.json).
+Benchmark result in JSON format @ [bench_result_on_Linux_6.11.0-19-generic_x86_64_with_g++_14.json](./bench_result_on_Linux_6.11.0-19-generic_x86_64_with_g++_14.json).

### On AWS EC2 Instance `c8g.large` i.e. AWS Graviton4
-
-Benchmark result in JSON format @ [bench_result_on_Linux_6.8.0-1016-aws_aarch64_with_g++_13.json](./bench_result_on_Linux_6.8.0-1016-aws_aarch64_with_g++_13.json).
+Benchmark result in JSON format @ [bench_result_on_Linux_6.8.0-1021-aws_aarch64_with_g++_13.json](./bench_result_on_Linux_6.8.0-1021-aws_aarch64_with_g++_13.json).

More about this EC2 instance @ https://aws.amazon.com/ec2/instance-types/c8g.