modpow implementation is not constant-time #19
Comments
IIRC RSA blinding (which I believe is used in this implementation) uses a random number, so while execution time is still variable, it does not correlate with a protected secret.
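For context, the blinding referred to here is RSA base blinding: before the private-key exponentiation the ciphertext is multiplied by r^e for a random r, and the result is multiplied by r^-1 afterwards. A minimal sketch of the idea, using textbook toy parameters rather than the crate's actual big-integer types:

```rust
// Toy illustration of RSA base blinding with textbook parameters
// (n = 3233 = 61 * 53, e = 17, d = 2753); real implementations use
// big integers and draw r from a CSPRNG.
fn modpow(mut base: u128, mut exp: u128, modulus: u128) -> u128 {
    let mut acc = 1u128;
    base %= modulus;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % modulus;
        }
        base = base * base % modulus;
        exp >>= 1;
    }
    acc
}

fn main() {
    let (n, e, d, phi) = (3233u128, 17u128, 2753u128, 3120u128);
    let c = 2790u128; // ciphertext of m = 65
    let r = 1234u128; // random blinding factor, gcd(r, n) = 1
    let r_inv = modpow(r, phi - 1, n); // r^-1 mod n via Euler's theorem

    // Decrypt (c * r^e) instead of c, then strip the factor r afterwards:
    // ((c * r^e)^d) * r^-1 = (m * r) * r^-1 = m (mod n).
    let blinded = c * modpow(r, e, n) % n;
    let m = modpow(blinded, d, n) * r_inv % n;
    assert_eq!(m, 65);
}
```

Because r is random, the value fed to modpow is uncorrelated with the plaintext, which is why variable timing was not initially considered exploitable at this point in the thread.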
Hi @newpavlov, I don't think that's the case here. The fuzzing target is directly testing this function:
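(The function in question, quoted in full in the email reply below, is the raw-RSA helper in rsa::internals:)

```rust
/// Raw RSA encryption of m with the public key. No padding is performed.
#[inline]
pub fn encrypt<K: PublicKey>(key: &K, m: &BigUint) -> BigUint {
    m.modpow(key.e(), key.n())
}
```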
So the problem is going to be in the implementation of modpow, which only takes secret data as input.

I've also got another fuzzing target that does full pkcs1v15 padding with a statically seeded CPRNG. It displays the same variable-time behavior (with a statically seeded deterministic PRNG, mind you), but the first fuzzing target was the more minimal case, so that's what I reported. (sidefuzz also accounts for RNGs by repeated sampling and taking a t-value, but this doesn't apply here.)

Admittedly, this could also be an artifact of the fuzzer (which would be a bug in the fuzzer), but I don't think that's the case here either.
Thank you for the report. I have not hardened the underlying modpow impl for constant time yet, so I am not surprised. I will look into it as soon as I find the time.
…On 30. Apr 2019, 22:55 +0200, Patrick D Hayes <***@***.***>, wrote:
Hi @newpavlov,
I don't think that's the case here. The fuzzing target is directly testing this function:

```rust
/// Raw RSA encryption of m with the public key. No padding is performed.
#[inline]
pub fn encrypt<K: PublicKey>(key: &K, m: &BigUint) -> BigUint {
    m.modpow(key.e(), key.n())
}
```

So the problem is going to be in the implementation of modpow, which only takes secret data as input.

I've also got another fuzzing target that does full pkcs1v15 padding with a statically seeded CPRNG. It displays the same variable-time behavior (with a statically seeded PRNG, mind you), but the first fuzzing target was the more minimal case, so that's what I reported.
For the purposes of …
Edit: work on …
Edit: opened #373 (and merged!)
Hi! 👋 I've recently been working on RSA side-channel attacks, and found that many implementations are vulnerable. That's described in the Marvin Attack. Thanks to help from @ueno, who contributed the test harness, I was able to run the test against rust-crypto (exact versions of all packages are in the PR). Unfortunately I have to report that the side-channel leakage from the numerical library is very substantial and easily detectable over the network. As such, I consider RustCrypto RSA to be vulnerable and exploitable.

Test results from a run with 100k repeats per probe on an i9-12900KS @ 5.225GHz:

Pairwise results are in report.txt. Confidence intervals for the measurements are as follows:

Legend:

Explanations of that are in the step2.py script in the marvin-toolkit repo. In other words, the protection from adding blinding is not sufficient, and RustCrypto has at least the same issue as CVE-2022-4304 in OpenSSL.
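For readers unfamiliar with the toolkit, the pairwise analysis boils down to collecting timings for two ciphertext classes in alternation and summarizing their paired differences; the sketch below uses made-up numbers, and the real statistics live in marvin-toolkit's step2.py:

```rust
// Minimal sketch of a pairwise timing comparison between two ciphertext
// classes. The data is invented purely for illustration.
fn main() {
    // Hypothetical per-call decryption times in nanoseconds.
    let class_a = [70_120u64, 70_340, 69_980, 70_510, 70_200];
    let class_b = [70_450u64, 70_600, 70_300, 70_800, 70_490];

    let mut diffs: Vec<i64> = class_a
        .iter()
        .zip(&class_b)
        .map(|(a, b)| *b as i64 - *a as i64)
        .collect();
    diffs.sort_unstable();

    let median = diffs[diffs.len() / 2];
    let positive = diffs.iter().filter(|d| **d > 0).count();
    println!("median pairwise difference: {median} ns");
    println!("{positive}/{} pairs where class B was slower", diffs.len());
}
```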
@tomato42 indeed that's using PKCS#1 v1.5 without random blinding, so with modpow being non-constant-time, it's to be expected. We should definitely get rid of any APIs which permit private key use without random blinding.
Also, just a note for the future: per our SECURITY.md we would've appreciated an advisory for this opened under a private security disclosure.
@tarcieri If the de-blinding isn't performed using constant-time code, then use of blinding won't remove the side-channel signal; that's what the bug in OpenSSL was about: removal of the blinder and conversion to constant-length bytes need to be side-channel free too.
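A toy illustration of that last point, the conversion to constant-length bytes: the unblinded value should always be serialized into a buffer of the full modulus length, doing identical work whether or not the value has leading zero bytes (the variable-length path is what CVE-2022-4304 exploited). The u128 here merely stands in for a multi-limb big integer:

```rust
// Always emit exactly `modulus_len` big-endian bytes, independent of the
// numeric value; a shortest-encoding conversion would take less time for
// values with leading zero bytes and leak that fact.
// (Toy: u128 stands in for the multi-limb integer; assumes modulus_len >= 16.)
fn to_fixed_be_bytes(value: u128, modulus_len: usize) -> Vec<u8> {
    let mut out = vec![0u8; modulus_len];
    out[modulus_len - 16..].copy_from_slice(&value.to_be_bytes());
    out
}

fn main() {
    // Both calls do identical work even though the second value is "short".
    assert_eq!(to_fixed_be_bytes(u128::MAX, 32).len(), 32);
    assert_eq!(to_fixed_be_bytes(1, 32)[..31], [0u8; 31]);
}
```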
Ah, sorry about that; since this was linked as a security issue, I assumed that its effect on the security of RSA decryption was public knowledge too.
Our answer until now has been that the application of random blinding prevented such side channels, so the fact that it isn't the case is news to us.
Sorry, I'm confused: on one hand you say that the …
The README.md is wrong in that case. (Also, I'm just discovering this and haven't had time to look over any code yet to confirm specific details.)
OK, I'm still running the test with a larger sample, to see if there aren't additional sources of leakage. If you have any additional questions, feel free to ask.
I'd be curious if you saw similar side channels in non-PKCS#1 v1.5 constructions like OAEP decryption or PSS signatures.
If you could propose a change on top of that PR that adds an OAEP decryption, I can run tests for that too (there are scripts in marvin-toolkit for testing OAEP too). But since the leak is clearly from the numerical library, that means OAEP is also vulnerable, as that leak happens before any OAEP checks. Even if PSS leaks like that, it doesn't make the implementation as easily exploitable: both PKCS#1 v1.5 and OAEP attacks are chosen-ciphertext attacks, while with PSS (or with PKCS#1 v1.5 signatures) the attacker can only reasonably affect the hash being signed, not the whole plaintext.
Can you confirm you see the side channel against the raw blinded modpow operation (when used with e.g. …)?
Sorry, I'm not a Rust developer; you will need to hand-hold me (provide the complete code to execute) if you want me to run any tests.
The branch containing that code has disappeared: https://github.com/ueno/marvin-toolkit/tree/wip/rust-crypto

You can test the low-level "hazmat" API: https://docs.rs/rsa/latest/rsa/hazmat/fn.rsa_decrypt.html

It would look something like ...replaced with:

```rust
loop {
    let mut buffer = vec![0; args.size];
    let n = infile.read(&mut buffer)?;
    if n == 0 {
        break;
    }
    assert!(n == buffer.len());
    let now = Instant::now();
    let c = rsa::BigUint::from_bytes_be(&buffer);
    let _ = rsa::hazmat::rsa_decrypt(Some(&mut rand_core::OsRng), &privkey, &c);
    writeln!(outfile, "{}", now.elapsed().as_nanos())?;
}
```

Note you'll need to add:

```toml
[dependencies]
anyhow = "1"
clap = { version = "4", features = ["derive"] }
rand_core = { version = "0.6", features = ["getrandom"] }
rsa = { version = "0.9", features = ["hazmat"] }
```
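For completeness, a rough sketch of the scaffolding that loop could sit in, since the original branch is gone. The names args.size, infile, outfile and privkey come from the snippet above; the argument names, key format and everything else are assumptions, not the actual marvin-toolkit harness:

```rust
use std::fs::File;
use std::io::{Read, Write};
use std::time::Instant;

use clap::Parser;
use rsa::pkcs8::DecodePrivateKey;

#[derive(Parser)]
struct Args {
    /// Bytes per ciphertext in the input file (the key size in bytes)
    #[arg(long)]
    size: usize,
    /// Private key as PKCS#8 DER (convert from PEM first if needed)
    #[arg(long)]
    key: std::path::PathBuf,
    /// Concatenated raw ciphertexts, `size` bytes each
    #[arg(long)]
    infile: std::path::PathBuf,
    /// Output file: one decryption time in nanoseconds per line
    #[arg(long)]
    outfile: std::path::PathBuf,
}

fn main() -> anyhow::Result<()> {
    let args = Args::parse();
    let privkey = rsa::RsaPrivateKey::from_pkcs8_der(&std::fs::read(&args.key)?)?;
    let mut infile = File::open(&args.infile)?;
    let mut outfile = File::create(&args.outfile)?;

    loop {
        let mut buffer = vec![0; args.size];
        let n = infile.read(&mut buffer)?;
        if n == 0 {
            break;
        }
        assert!(n == buffer.len());
        let now = Instant::now();
        let c = rsa::BigUint::from_bytes_be(&buffer);
        let _ = rsa::hazmat::rsa_decrypt(Some(&mut rand_core::OsRng), &privkey, &c);
        writeln!(outfile, "{}", now.elapsed().as_nanos())?;
    }
    Ok(())
}
```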
(will test …) With 1 million observations per sample (same setup) I'm getting:

and pairwise results:

Which show that the errors in padding checks also leak (compare …).
Ran the tests against this:

```rust
let c = rsa::BigUint::from_bytes_be(&buffer);
let _ = rsa::hazmat::rsa_decrypt(Some(&mut rand_core::OsRng), &privkey, &c);
```

and have confirmed that it's leaking too. Same overall configuration but with 2.5 million observations per probe:

Since padding doesn't matter for raw RSA, I've executed a slightly different set of tests:

and pairwise statistical test results:
I'm not sure what solution we can provide here short of a fully constant-time implementation (which was the planned long-term solution for this problem, using e.g. a Barrett reduction as provided by …).

I guess I need to figure out how to run your reproducer. Can you explain a bit more what those charts are showing? Are you actually able to recover keys?
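For reference, a toy sketch of the Barrett reduction idea mentioned above (small u128 values; a real constant-time version works on fixed-size multi-limb integers and replaces the trailing loop with unconditional, masked subtractions):

```rust
// Barrett reduction: compute x mod n for x < 2^(2k) using only multiplications,
// shifts, and a couple of final subtractions, with mu precomputed once per n.
fn barrett_reduce(x: u128, n: u128, k: u32, mu: u128) -> u128 {
    let q = ((x >> (k - 1)) * mu) >> (k + 1); // estimate of x / n, off by at most 2
    let mut r = x - q * n;                    // so r is in [0, 3n)
    while r >= n {
        r -= n; // constant-time code would do this as a masked subtraction instead
    }
    r
}

fn main() {
    let n = 3233u128; // toy modulus, 2^11 <= n < 2^12
    let k = 12u32;
    let mu = (1u128 << (2 * k)) / n; // floor(2^(2k) / n), precomputed once
    for x in [0u128, 1, 3232, 1_000_000, 3232 * 3232] {
        assert_eq!(barrett_reduce(x, n, k, mu), x % n);
    }
}
```

The appeal is that once mu is fixed, the same sequence of operations runs for every input, which is what makes a constant-time modular reduction practical.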
What's necessary is ensuring that the unblinding happens in constant time with respect to the output; that means you need to convert whatever arbitrary-precision format is in use for the modular exponentiation into constant-length integers, then multiply them using real constant-time code, and finally do a constant-time modular reduction. Something like what's in https://github.com/tomato42/ctmpi, but in Rust, not in C. Leakage in the conversion from one format to the other is not a problem, as that value is uncorrelated with the RSA plaintext, so leakage of it doesn't provide useful information to the attacker (as long as the blinding/unblinding factors remain secret). That being said, if your mod_exp is leaky, you really have to employ both base blinding and exponent blinding. Details of that, as well as links to papers showing attacks against implementations that used just one kind of blinding, are in https://datatracker.ietf.org/doc/html/draft-kario-rsa-guidance-02
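A tiny sketch of the exponent-blinding half of that recommendation, with throwaway textbook numbers rather than real key material:

```rust
// Exponent blinding: exponentiate with d + k * phi(n) for a fresh random k on
// every decryption, so even a leaky modpow never reuses the same exponent bits.
fn main() {
    // Tiny textbook example: n = 15, phi(n) = 8, e = 3, d = 3 (3 * 3 = 9 ≡ 1 mod 8).
    let (n, d, phi) = (15u128, 3u32, 8u32);
    let c = 8u128; // ciphertext of m = 2 (2^3 mod 15), gcd(c, n) = 1
    let k = 2u32;  // would be a fresh random value per decryption

    // c^(d + k*phi) ≡ c^d (mod n) by Euler's theorem, so decryption still works,
    // but the exponent bit pattern seen by a leaky modpow changes every time.
    assert_eq!(c.pow(d + k * phi) % n, c.pow(d) % n);
    assert_eq!(c.pow(d) % n, 2);
}
```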
They show the estimation of the difference in timing for different inputs. Like in the last one, …
Those graphs show that it's possible, using the decryption time, to execute one instance of the Bleichenbacher oracle. To decrypt an RSA ciphertext (or forge a signature) the attacker would need to run the test a good few thousand times. See the details on the Marvin vulnerability page and in the associated papers. Since my goal is proving that there is no side channel, not actually attacking implementations, I don't have good tooling to actually perform the attack (but then there's 25 years of literature on the topic, so once you have a working oracle, there are ready solutions for the rest).
Another run for …:

and pairwise statistical results: report.txt

Which suggests that for the leakage to happen, the whole most significant word needs to equal 0. This will make attacks against OAEP with 2048-bit keys rather impractical on 64-bit architectures. But it does mean that both 32-bit architectures and less common key sizes, like 2049- or 2056-bit keys, are rather realistic to attack.

Note: I haven't excluded the possibility of leakage happening with probe 3 or probe 4 (the 16 and 32 most significant bits being zero, respectively), but it does look exactly the same as the leakage pattern for CVE-2022-4304, where I did do that.
The core … That alone is insufficient, however, per the OP. I'd generally worry that I could potentially attempt to completely sidestep …
Hello. I would like to code an exploit for a responsible vulnerability disclosure. I am a beginner in crypto. How much timing noise, in milliseconds, is acceptable to have a reliable exploit, please? If you want, I can provide the exploit to the project.
The rsa crate / RUSTSEC-2023-0071 was mentioned on a Debian mailing list: https://lists.debian.org/debian-rust/2024/08/msg00017.html

This bug is marked as "release critical" in Debian terms and is stopping the crate from migrating to testing (and all crates that depend on it, directly or transitively):
@gogo2464 the Marvin toolkit already contains the necessary code to exploit the vulnerability, if you'd like to experiment with that. As it were, it would be quite helpful if we could find some way to run it in CI, if only on a scheduled basis.
Regarding the Debian announcement, it would be good if we could move #394 forward. /cc @dignifiedquire
@tarcieri thanks! I am going to dig!
@tarcieri I am actively digging! I used the Marvin toolkit. I have the data. But I need a tool to recover the message from the identified vulnerability using the previously extracted data. Do you know of such a tool, please?
I don't offhand, although I believe it's part of the toolkit. @gogo2464 it would probably help if you asked these questions on Zulip so we can keep this already quite busy issue on topic for resolving the core issue: https://rustcrypto.zulipchat.com/#narrow/stream/260047-RSA
No, the toolkit doesn't have the exploit as part of it. There are multiple reasons for that, but the main one is that it's hard to create one that's universal. In practice, the Marvin toolkit is able to run a single instance of the Bleichenbacher oracle; you need to create a separate piece of code that will call this oracle. The Marvin paper does give a high-level overview; for details you'll need to look at the past papers.

I'm not exactly sure what you're asking here... a reliable exploit requires you to be able to run statistical tests that give highly significant results (p-value below 1e-6 at least) for values we're interested in (i.e. ciphertexts that decrypt to plaintexts starting with specific bytes), or ones that consistently reject values we're not interested in (i.e. ciphertexts that decrypt to plaintexts that don't start with specific bytes). That's a product of everything: size of the sample, size of the side channel, size of the noise, nature of the noise (is it changing in time?), amount of irrelevant code measured together with the leaky code, resolution, precision and accuracy of the time source used... just as an example of the most significant parts of that.
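As a very rough illustration of how those factors trade off, here is a back-of-the-envelope normal-approximation estimate with made-up numbers; it is nothing like the toolkit's actual non-parametric analysis and is only meant to show the orders of magnitude involved:

```rust
// Very rough sample-size estimate assuming i.i.d. Gaussian noise and a plain
// two-sample z-test. All numbers below are hypothetical.
fn main() {
    let delta_ns = 300.0_f64;    // hypothetical size of the timing difference
    let sigma_ns = 50_000.0_f64; // hypothetical std dev of the measurement noise
    let z_alpha = 4.75;          // one-sided z for p ≈ 1e-6
    let z_beta = 1.28;           // 90% power
    // Observations needed per class: n ≈ 2 * ((z_a + z_b) * sigma / delta)^2
    let n = 2.0 * ((z_alpha + z_beta) * sigma_ns / delta_ns).powi(2);
    println!("roughly {:.0} observations per ciphertext class", n);
}
```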
@tarcieri
The PR to address it is still open: #394
Hi there,

I'm the author of sidefuzz (https://github.com/phayes/sidefuzz) and I have found what appears to be variable-time behavior in the `rsa::internals::encrypt()` function. Specifically, `rsa::internals::encrypt()` appears to be variable-time in relation to the message. Note that I haven't worked this up into an actual exploit, but merely demonstrated that this function isn't constant-time in relation to the message inputted.

Specifically, the message `20d90c8af42aac9b1ee53dc9a0187201` takes 549894 instructions to encrypt, while the message `5a28cec68d47f6fe3b1df54c9f320f6d` takes 552427 instructions to encrypt. This is a difference of 2533 instructions, or about 0.5%. So it's a slight difference, but probably exploitable with sufficient sampling.

I have created fuzzing targets here: https://github.com/phayes/sidefuzz-targets

You can confirm this difference with the sidefuzz tool like so:

My first suspicion was that this was due to `num_bigint_dig::BigUint::from_bytes_be()` being variable-time, but fuzzing that function specifically results in what appears to be constant-time behavior. So I'm not actually sure where the problem is.