Further simplification of the Mbed Crypto provider #187

Closed
hug-dev opened this issue Jun 18, 2020 · 14 comments · Fixed by #191
Labels
enhancement New feature or request

Comments

@hug-dev
Member

hug-dev commented Jun 18, 2020

In the conversation of #186 :

I am thinking of a few things as next steps to simplify the provider further, I said:

  1. I think it would be possible to remove the local_ids field and replace it with an AtomicU32 counter (a rough sketch is included at the bottom of this comment)?
  2. The key_handle_mutex might not be needed anymore if updating Mbed Crypto solved the thread-safety issues. If it is still needed (the Parsec stress tests fail without it), it might be better suited to the psa-crypto crate.
  3. I am not sure the key_slot_semaphore is needed at all. It limits the number of open key slots at a time, but if that is a constraint of Mbed Crypto we might just return the error.

to which @sbailey-arm replied:

  1. We could do. How would we deal with holes when keys are destroyed? E.g. Allocate keys 1 to 100, then destroy 34, 67, 78 and 91.

  2. This is the issue for that and it's still open, so I think we'll hit issues. I can try testing without it though, and if it causes a problem, look into moving it to psa-crypto.

  3. stress_test and normal_tests pass after removing the semaphore guard request in each operation method so it looks like we can remove it.

Let's continue the discussion here and see how to address those points.
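
For point 1, a rough sketch of what the counter could look like (the names, error type, and bound here are only illustrative, not the actual provider code):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Hypothetical replacement for local_ids: a monotonically increasing counter.
// Destroyed IDs are simply never reused, so destroying keys leaves holes.
static NEXT_KEY_ID: AtomicU32 = AtomicU32::new(1);

// PSA user key IDs go up to 0x3FFF_FFFF.
const MAX_KEY_ID: u32 = 0x3FFF_FFFF;

fn new_key_id() -> Result<u32, &'static str> {
    let id = NEXT_KEY_ID.fetch_add(1, Ordering::Relaxed);
    if id > MAX_KEY_ID {
        // Out of IDs: fail the creation rather than wrapping around.
        Err("key ID space exhausted")
    } else {
        Ok(id)
    }
}

fn main() {
    assert_eq!(new_key_id(), Ok(1));
    assert_eq!(new_key_id(), Ok(2));
}
```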

cc @ionut-arm

@hug-dev hug-dev added the enhancement New feature or request label Jun 18, 2020
@hug-dev
Member Author

hug-dev commented Jun 18, 2020

We could do. How would we deal with holes when keys are destroyed? E.g. Allocate keys 1 to 100, then destroy 34, 67, 78 and 91.

You are right that there will be holes but I think the window of values is so big that it should not be a problem when Parsec is used. Maybe only in stress test?

This is the issue for that and it's still open, so I think we'll hit issues. I can try testing without it though, and if it causes a problem, look into moving it to psa-crypto.

I think it is better placed in psa-crypto. The goal of PSA Crypto in the end is to be totally agnostic of the different implementations, but it looks like now we are tying more and more things to Mbed Crypto. I think that is fine for now.
One thing to note as well is that Mutex won't be in the no-std feature.

stress_test and normal_tests pass after removing the semaphore guard request in each operation method so it looks like we can remove it.

Nice! I think that running out of slots is a valid error to return anyway and not something we should handle in Parsec.

@ionut-arm
Member

You are right that there will be holes but I think the window of values is so big that it should not be a problem when Parsec is used. Maybe only in stress test?

If we run a stress test that generates 100 keys per second, we'd have to run it for almost a year and a half to run out of handles (if we assume that the whole 32-bit space is available).

@hug-dev
Member Author

hug-dev commented Jun 19, 2020

The Mbed Crypto provider should also use parallaxsecond/rust-psa-crypto#30 instead of redefining it.

@sbailey-arm
Contributor

The PSA key ID range is 1 to 1,073,741,823, but even so, maybe I'm overthinking it a bit. Even generating one key per second, it would take you 34 years to run out of space 😋.
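
For reference, the rough arithmetic behind both estimates (approximate figures only):

```rust
fn main() {
    let secs_per_year = 60.0 * 60.0 * 24.0 * 365.0;
    // Whole 32-bit space at 100 keys per second: ~1.4 years.
    println!("{:.2}", 4_294_967_296.0 / 100.0 / secs_per_year);
    // PSA user key ID range (1..=1_073_741_823) at 1 key per second: ~34 years.
    println!("{:.1}", 1_073_741_823.0 / secs_per_year);
}
```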

One thing to note as well is that Mutex won't be in the no-std feature.

Can we do optional types based on features? 'If no_std, use an int (though what if someone's using a dual-core microcontroller... The cortex_m crate mentions Mutex), otherwise use the standard Mutex'.

@hug-dev
Member Author

hug-dev commented Jun 19, 2020

Can we do optional types based on features? 'If no_std, use an int (though what if someone's using a dual-core microcontroller... The cortex_m crate mentions Mutex), otherwise use the standard Mutex'.

I think the Mutex for the local_ids (that could be replaced by the counter) is different from key_handle_mutex, which ensures thread safety of slot operations. For that last one it could be a good idea to use the no_std Mutex! We could also choose to not implement anything in the no_std case and document it.
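
Something along these lines is probably what the feature gating would look like (a sketch only, with assumed feature and type names; the bare-metal branch would need the cortex_m crate and, as noted above, is only sound on single-core systems):

```rust
// Pick the lock type used around slot operations depending on the build.
#[cfg(not(feature = "no-std"))]
pub type KeyHandleLock = std::sync::Mutex<()>;

#[cfg(feature = "no-std")]
pub type KeyHandleLock = cortex_m::interrupt::Mutex<core::cell::RefCell<()>>;
```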

@sbailey-arm
Contributor

sbailey-arm commented Jun 24, 2020

As stated here, we have a thread-safety issue that needs fixing. This is because mbedtls has a bug which isn't going to be fixed soon, so psa-crypto needs to cover for it. This is point 2 in this issue.

psa-crypto is supposed to be no_std, but that introduces a problem: we don't have access to the standard Mutex from the std crate (as @hug-dev pointed out!). There is a bare-metal mutex, but it is not safe for multi-core systems.

Ignoring no_std, we have the regular Mutex, but new isn't compatible with a static declaration. Another option is StaticMutex, but that has been marked as unstable.

@hug-dev
Member Author

hug-dev commented Jun 24, 2020

new isn't compatible with a static declaration

I forgot about that! We discussed that, and possibly having a PsaCrypto struct which would contain the mutex, in design issue #8.
It really should not be our responsibility to fix a bug in Mbed Crypto, especially if the fix is for one single implementation of PSA Crypto, as we aim to be agnostic of the specific implementation. So I would say that we should drop that change for now but still look at points 1 and 3 in the first post.

@ionut-arm
Member

new isn't compatible with a static declaration

That's very weird, it literally contradicts the documentation:

The mutex can also be statically initialized or created via a new constructor.

The documentation for a (now deprecated) StaticMutex hints at using lazy_static, so that might be what they mean when they say "can be statically initialized"
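
That is presumably the pattern they have in mind, something like this (KEY_HANDLE_MUTEX is just an illustrative name):

```rust
use std::sync::Mutex;
use lazy_static::lazy_static;

lazy_static! {
    // Initialised lazily on first use, so it does not matter that Mutex::new
    // cannot (yet) be called in a static.
    static ref KEY_HANDLE_MUTEX: Mutex<()> = Mutex::new(());
}

fn main() {
    let _guard = KEY_HANDLE_MUTEX.lock().unwrap();
    println!("key handle lock held");
}
```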

@sbailey-arm
Contributor

new isn't compatible with a static declaration

That's very weird, it literally contradicts the documentation:

The mutex can also be statically initialized or created via a new constructor.

The documentation for a (now deprecated) StaticMutex hints at using lazy_static, so that might be what they mean when they say "can be statically initialized"

Yeah, I tried it and it wouldn't compile.

eventually this type shouldn't be necessary as Mutex::new in a static should suffice

I presume they haven't done this yet!
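
For reference, this is what that would eventually look like once Mutex::new can be used in a const context (it could not at the time of this discussion), with no lazy_static or StaticMutex needed:

```rust
use std::sync::Mutex;

// A plain static; only valid on toolchains where Mutex::new is a const fn.
static KEY_HANDLE_MUTEX: Mutex<()> = Mutex::new(());

fn main() {
    let _guard = KEY_HANDLE_MUTEX.lock().unwrap();
    println!("lock acquired");
}
```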

@sbailey-arm
Contributor

For 1., if you're not concerned about creating holes when destroying keys, do we ever need to decrement the counter by more than 1? As an example: we generate 100 keys, then destroy keys 90 to 100 in ascending order (so key 100 is the last to be destroyed). Do we reset the counter to 89, as this is now the highest key ID in use, or do we leave it at 100 (or even 99, since it would be easy to check whether the key currently being destroyed is the highest; if it is, we know the next highest key ID has to be no higher than current_highest_ID - 1)? If we get rid of local_ids, resetting the counter to the actual next highest, rather than the safest assumed highest, becomes more difficult/not possible.

@hug-dev
Member Author

hug-dev commented Jun 24, 2020

do we leave it at 100

I think even checking whether the key to delete has the highest key ID in use, while preventing data races between that check and the deletion (so that no other key can be created with that ID), is quite complicated and would need another mutex.
I personally think it is not worth it just to save one possible hole, and we can simply never decrement the counter.

@sbailey-arm
Contributor

Ok. What if create_key_id fails to store the key? Currently, the new ID is only kept if the key with that ID is saved to the store successfully.

With the AtomicU32, I can increment it atomically, but if it then fails to store the key with this new ID, I can't decrement it because there's no guarantee that another thread hasn't come in and incremented the counter in the meantime. If another thread did come in, increment the counter, and create and store a new key with that ID, and the thread that failed then decremented the counter, the next time a key is created you'll end up with a new ID that is the same as the one generated in the thread that succeeded.

@hug-dev
Member Author

hug-dev commented Jun 25, 2020

I can't decrement it because there's no guarantee that another thread hasn't come in and incremented the counter in the meantime.

Yes, that is a good point. I think it should be fine to never decrement the counter and safely return an error if it gets too big. If we run into problems because of this in the future, we can change our system.
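
A minimal sketch of that never-decrement behaviour (the HashSet store and all names are toys for illustration, not the real key info manager):

```rust
use std::collections::HashSet;
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Mutex;

static NEXT_KEY_ID: AtomicU32 = AtomicU32::new(1);
const MAX_KEY_ID: u32 = 0x3FFF_FFFF;

// Toy stand-in for the key info store, just to show the control flow.
fn create_key(store: &Mutex<HashSet<u32>>, store_fails: bool) -> Result<u32, &'static str> {
    let id = NEXT_KEY_ID.fetch_add(1, Ordering::Relaxed);
    if id > MAX_KEY_ID {
        return Err("key ID space exhausted");
    }
    if store_fails {
        // Storing the key failed: deliberately do NOT decrement the counter,
        // since another thread may already have taken a higher ID. The ID
        // just becomes a hole.
        return Err("failed to store key");
    }
    store.lock().unwrap().insert(id);
    Ok(id)
}

fn main() {
    let store = Mutex::new(HashSet::new());
    assert_eq!(create_key(&store, false), Ok(1));
    assert!(create_key(&store, true).is_err()); // ID 2 is lost and becomes a hole
    assert_eq!(create_key(&store, false), Ok(3));
}
```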

@ionut-arm
Member

If the number is too big then you can overwrite the atomic with the maximum allowed value. At that point it doesn't matter whether it's MAX or MAX+1 because creations will fail anyway before reaching Mbed Crypto.

It's more important to decide how to get past this problem in a system that actually uses this provider (assuming someone might want to use it in production).
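
A hedged sketch of that clamping idea (the function name and bound are illustrative):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

static NEXT_KEY_ID: AtomicU32 = AtomicU32::new(1);
const MAX_KEY_ID: u32 = 0x3FFF_FFFF;

// Once the counter passes the maximum, pin it just above MAX so repeated
// failed creations can never wrap it back around into the valid range.
fn next_key_id() -> Option<u32> {
    let id = NEXT_KEY_ID.fetch_add(1, Ordering::Relaxed);
    if id > MAX_KEY_ID {
        NEXT_KEY_ID.store(MAX_KEY_ID + 1, Ordering::Relaxed);
        return None; // creation fails before ever reaching Mbed Crypto
    }
    Some(id)
}

fn main() {
    assert_eq!(next_key_id(), Some(1));
}
```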
