aead: Proposal for revision to API surface for trait AeadInPlace #1672
I'll reiterate my previous comments that, based on my firsthand experience with APIs like this, I find them to be very clunky and annoying from an API ergonomics perspective:
I would very much consider removing …

I'm also confused why you're bringing it up again when yesterday you said "clearly it should stay in". I thought we had actually settled this matter?
I haven't looked deeply into the AEAD APIs for some time, so here are some surface-level ideas and preferences which may change depending on discussion. I think that the "fundamental" AEAD API should be a detached-style trait on top of InOutBuf.

Next, we should provide extension traits which implement prepending and appending modes of operation. In other words, even if an algorithm specification states that the tag should be prepended, we should allow users to append tags while using it. Depending on whether the tag is appended or prepended, the trait methods may provide slightly different APIs (e.g. for appending methods we can support in-place encryption over …).

Finally, about abstracting over prefix versus postfix tags in a high-level API: I think the easiest way to support this will be to provide only slice-to-…
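A minimal sketch of what such a detached-style trait on top of InOutBuf could look like (all names below are placeholders, not an actual aead API):

```rust
use aead::{AeadCore, Error, Nonce, Tag};
use inout::InOutBuf;

// Placeholder names throughout; this only illustrates the shape of a
// "fundamental" detached-style trait built on top of InOutBuf.
pub trait AeadInOutDetached: AeadCore {
    /// Encrypt the data in `buffer`, returning the detached authentication tag.
    fn encrypt_inout_detached(
        &self,
        nonce: &Nonce<Self>,
        associated_data: &[u8],
        buffer: InOutBuf<'_, '_, u8>,
    ) -> Result<Tag<Self>, Error>;

    /// Decrypt the data in `buffer`, verifying it against the detached tag.
    fn decrypt_inout_detached(
        &self,
        nonce: &Nonce<Self>,
        associated_data: &[u8],
        buffer: InOutBuf<'_, '_, u8>,
        tag: &Tag<Self>,
    ) -> Result<(), Error>;
}
```

The prepend/append extension traits would then be provided as impls on top of something like this (one concrete shape for those is sketched in a later comment).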
Apologies; if you came away with the understanding that I think …
The section "Simplifying wrappers" was a direct response to this section of that comment. Apologies that I hadn't made that clearer.

My position is only that it would be ideal for there to be an additional alternative to the APIs that require the caller to use specialist types (e.g. …).

As a practical example, think of a situation where one receives a pair of file descriptors pointing to some mmappable read-only and read-write shared memory, into which some data must be efficiently decrypted. It's simply not an option to change how one allocates their buffer and swap in e.g. an … In that situation, it would certainly be preferable to have an alternative available to forego that trouble!

I believe some of that confusion may also have been introduced by what I suggested in the prior section, so I wanted to make clear my understanding. As I understand it, …
These goals make perfect sense -- both in the current API and any other. My suggestion in the first section (specified below for clarity):
...is only in relation to in-place decryption; if we want to revise the API to be entirely based on just methods (implemented by each algorithm crate) which consume InOutBuf, …

The fact that the suggested in-place decryption API doesn't use …

I hope that clarification was able to resolve your concerns! If so, what do you think of the conundrum I brought up regarding how best to adapt in-place, tag-prepended encryption to an …

Do you feel that adapting …
I would really prefer to initially focus exclusively on adapting the detached APIs to use InOutBuf.

Doing that will also make PRs easier to review, and avoids unnecessarily breaking any existing APIs and code using them.
Use cases like this that need very explicit control of buffering should be using the …
Adapting all of the existing code should be straightforward: the impls of …

Note: we can consider adding new slice-based APIs which abstract over …
This sounds good to me, although I'd probably suggest a marker trait with an associated constant.

This could potentially improve the efficiency of the blanket impl of …
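For concreteness, one possible shape for a marker trait carrying such an associated constant (the names and the enum below are illustrative assumptions, not an agreed design):

```rust
// Illustrative placeholder names, not part of the aead crate.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum TagPosition {
    Prefix,
    Postfix,
}

pub trait TagPlacement {
    /// Where the algorithm's specification places the authentication tag.
    const TAG_POSITION: TagPosition;
}
```

Blanket impls or provided methods could then branch on TAG_POSITION rather than needing a separate marker trait per ordering.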
Addressing your input, @newpavlov:
I agree -- when you suggested this, I started work revising things to accommodate this. It makes a lot of sense -- implementing every other use case in terms of a wrapper around a method which consumes InOutBuf.
I have no personal objection to this idea, but it might be wise to gate the alternative behind an explicit gesture to indicate that it's not the normal use case -- perhaps with something like this:

pub trait TagDetachedAead: AeadCore {/* ... */}

pub trait AppendsAuthenticationTag {} // Marker trait
pub trait PrependsAuthenticationTag {} // Marker trait

pub trait TagAppendingAead {/* ... */}
pub trait TagPrependingAead {/* ... */}

impl<Algorithm: TagDetachedAead + AppendsAuthenticationTag> TagAppendingAead for Algorithm {/* ... */}
impl<Algorithm: TagDetachedAead + PrependsAuthenticationTag> TagPrependingAead for Algorithm {/* ... */}

#[repr(transparent)]
pub struct TagReversed<Algorithm: TagDetachedAead>(pub Algorithm);

impl<Algorithm: TagDetachedAead + AppendsAuthenticationTag> TagPrependingAead for TagReversed<Algorithm> {/* ... */}
impl<Algorithm: TagDetachedAead + PrependsAuthenticationTag> TagAppendingAead for TagReversed<Algorithm> {/* ... */}

(Of course, all of the above names are placeholders -- I hold no attachment to any of them.)

That said, if we want this kind of swappable behavior, I believe we will need to go with the option of adding an optional offset value to InOutBuf.
Yes, I agree. I had in mind the const generics restrictions, which do not apply in this case.
Yes, we may need to introduce something similar to …
I don't think this makes sense. Most AEADs use a postfix mode. Why would we allow users to use a prefix tag with these modes? It's not how they're specified. It introduces needless user choice which allows for non-standard variants of something which is standardized, and which won't be compatible with other implementations.
I think that at the very least we should allow users to use prefix constructions in the postfix mode. I agree that the reverse is less important, but it may make sense from a completeness point of view and in some niche applications. Bounding prefix methods on the associated constant is likely to result in a more convoluted API overall.
Using prefix constructions in a postfix mode would again create non-standard variants of algorithms which aren't compatible with any other implementation. I think we should implement the algorithms as specified and not introduce arbitrary knobs that allow users to create nonstandard variants. I can imagine that leading to a nasty surprise when users attempt to interop, only to discover that their messages aren't formatted in the standard way.
Part of the trouble is that this is perfectly doable for most of the API's use cases, but falls apart for in-place, tag-attached, tag-prepended encryption and decryption, as I specified before (this is the reason I created this issue rather than setting my nose to the grindstone and (eventually) submitting a PR for exactly that). I'll go over the issues I've encountered while updating …

Problem 1: In-place, tag-attached, tag-prepended operations

We have a number of constraints for in-place encryption and decryption, all of which need to be met: …
Everything works fine in-place while tags are detached, because the plaintext is going to be the same length and position as the ciphertext -- we can create an InOutBuf …

Everything works fine in-place while tags are appended, because the plaintext is still going to be the same length and position as the ciphertext.

Everything works fine while prepending in-out, because the in-buffer and out-buffer will never overlap.

But everything breaks down when tags are prepended in-place, because the plaintext and ciphertext (i.e. in-buffer and out-buffer) necessarily overlap at an offset. (See the illustration below.)

Thus, something needs to give: either we need to revise the API to accommodate our constraints, or we need to implement offset-aware in-place, tag-prepended overrides in crates like aes-siv. That is why I put my thoughts as to various options in the original post -- we don't have to change the surrounding APIs, but we do have to change something if we want to base this API on InOutBuf.

Problem 2: Algorithm crate compatibility with …
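To make that overlap concrete, a toy illustration (TAG_LEN is an arbitrary example value; none of this is proposed library code):

```rust
use inout::InOutBuf;

const TAG_LEN: usize = 16; // arbitrary example value, not tied to any algorithm

// Appended tag: plaintext and ciphertext occupy the same bytes, so a plain
// (in == out) InOutBuf over the message region is enough, with the tag slot
// sitting after it.
fn appended_tag_regions(buffer: &mut [u8]) -> (InOutBuf<'_, '_, u8>, &mut [u8]) {
    assert!(buffer.len() >= TAG_LEN);
    let msg_len = buffer.len() - TAG_LEN;
    let (msg, tag_slot) = buffer.split_at_mut(msg_len);
    (msg.into(), tag_slot)
}

// Prepended tag: the caller's plaintext starts at index 0, but the ciphertext
// must be written starting at index TAG_LEN, so the input region
// buffer[..msg_len] and the output region buffer[TAG_LEN..] overlap at an
// offset -- something a plain InOutBuf cannot express.
```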
Supporting this properly would need support in the underlying cipher for encrypting/decrypting data at an offset. There's an issue tracking support for that here: #764

In absence of that, we work around this issue for …

That isn't ideal, since it's incurring an additional copy, but it's the best we can do without underlying support from the cipher itself. In the meantime I would suggest continuing to use the …
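For reference, one way a copy-based workaround like that can be realized against the existing detached method (an illustrative sketch only -- hypothetical function name, assumes the typenum crate and alloc -- not the literal aes-siv code):

```rust
use aead::{AeadInPlace, Buffer, Error, Nonce};
use typenum::Unsigned;

// Illustrative only: shift the plaintext right by the tag length (the extra
// copy), encrypt it in place via the detached method, then write the tag into
// the space freed at the front, yielding the [tag | ciphertext] layout.
fn encrypt_in_place_prefix<A: AeadInPlace>(
    aead: &A,
    nonce: &Nonce<A>,
    associated_data: &[u8],
    buffer: &mut dyn Buffer,
) -> Result<(), Error> {
    let tag_len = A::TagSize::to_usize();
    let pt_len = buffer.as_ref().len();

    // Grow the buffer by the tag length, then move the plaintext out of the way.
    buffer.extend_from_slice(&vec![0u8; tag_len])?;
    let data = buffer.as_mut();
    data.copy_within(..pt_len, tag_len);

    // Encrypt the shifted plaintext in place and prepend the resulting tag.
    let tag = aead.encrypt_in_place_detached(nonce, associated_data, &mut data[tag_len..])?;
    data[..tag_len].copy_from_slice(tag.as_slice());
    Ok(())
}
```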
Understood. So just to be clear, we'll treat …

I'm not a complete Rust expert, so please do bear with me in asking this question, as I will admit this solution has grown on me since having posed it: would it not be possible to adapt any algorithm which is implemented in terms of …

That is to say -- would there be any fundamental problem in adding an …

If not, I'm more than willing to go the route of simply adapting …
That sounds like a potentially interesting solution if it can be made to work.
Honestly, I am not sure about the importance of the prefix/postfix mode specified by an AEAD algorithm in practice. In my experience, tag/nonce handling is usually defined by a file format or protocol, not by an AEAD algorithm. Moreover, as discussed above, SIV is a clear outlier in this regard, so I question the effort we spend on ergonomically supporting it. The …
One could say I've been "nerd-sniped" by this problem, so I'll do some experimentation and will most likely submit a PR to the …
I agree with this, perhaps slightly differently: I don't believe the algorithm crates should be aware of how their authentication tag is packed, but I do believe that the most interoperable order should be encoded into the API -- to do otherwise would introduce confusion and frustration.
I'm not entirely sure how I feel about it. On the one hand, clearly, it would be ideal to have an optimal implementation for those who do need AES-SIV; on the other, I don't personally want to be spending extended amounts of time digging into an outlier if it can be avoided. I'll do my experimentation with …
The position of the tag is often specified as part of the algorithm, usually in the prose description, and if not, in the test vectors. Here's one example: RFC8452, describing AES-GCM-SIV: …
Especially in cases where the tag position is prescribed by the algorithm specification, I don't think we should deviate. It's only creating interoperability problems that don't need to exist. Postfix tags are effectively "standard" now, so I especially think we shouldn't allow people to use algorithms which are otherwise postfix-tagged in a prefix-tagged mode.
It describes postfix tags, no? Also, I wouldn't say it's universal, e.g. MGM returns a tuple with ciphertext and tag without specifying whether the tag should be appended or prepended. I think we can ignore the test vectors case. It's a bit unfortunate that we need to tweak vectors slightly, but we can live with that.

Can you provide an example of a practical application which respects this aspect of the RFC, i.e. an application which uses postfix tags for, say, GCM and prefix tags for SIV?
I am fine with this, but I think we should provide a way to use prefix constructions in postfix mode. And I don't think we should spend too much effort on designing APIs around the SIV outlier.
MGM is definitely the odd one out there. Every other AEAD has a canonical prefix or postfix tag encoding. I think we can handle it by bounding blanket impls on a marker trait for the message ordering (e.g. …).
No, I can't, because AES-SIV is not widely deployed in practical applications. Many protocols, however, are specified in terms of opaque AEAD messages, as the constructions are specified in the RFCs. Many AEAD APIs only work in terms of those messages. When you say "In my experience, tag/nonce handling is usually defined by a file format or protocol, not by an AEAD algorithm" I wonder if perhaps you are confusing tags and nonces. In my experience AEAD messages including the tag are generally handled opaquely, whereas nonces are very much a protocol-level concern.
If you were to use AES-SIV in a postfix mode, it would be violating the message ordering prescribed in RFC5297, which is …

Note that AES-SIV isn't the only construction with a prefixed tag: XSalsa20Poly1305, now located in the …
Because we can provide more flexible APIs for the postfix mode, while with an "opaque AEAD" trait we inevitably have to use less efficient or significantly more convoluted APIs. We can explicitly state that usage of a "postfix AEAD" trait may result in a non-standard construction and that users should be careful with it when aiming for interoperability with other software.
I think what might make the most sense is a …

Algorithms which are specified with a non-postfix tag order (AES-SIV, XSalsa20Poly1305) can then provide their own impls of these traits, and algorithms which don't specify an order (MGM) can opt out.
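A rough sketch of how those pieces could fit together (every name here is a placeholder; only the structure matters):

```rust
// Placeholder names, not an agreed design.
pub trait PostfixTagged {} // marker: the specification appends the tag

pub trait AeadAppendTag { /* postfix-style attached-tag methods */ }
pub trait AeadPrependTag { /* prefix-style attached-tag methods */ }

// Blanket impl only for algorithms whose specification uses a postfix tag...
impl<T: PostfixTagged> AeadAppendTag for T {}

// ...while prefix-ordered algorithms (e.g. AES-SIV, XSalsa20Poly1305) provide
// their own AeadPrependTag impls, and algorithms with no specified order
// (e.g. MGM) simply don't implement the marker and choose for themselves.
```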
Migrated from PR #1663 for discussion.
@tarcieri -- following on from where we left off: I've been ruminating on the issue since my last message yesterday.
As @newpavlov proposed, I've been working on modifying the algorithm crates so that we can revise the API surface of AeadInPlace (or whatever we might rename or split it out to) to be implemented in terms of methods which consume InOutBuf.

Fully separating authentication tag placement from the algorithm itself
As you already know, currently, the AeadInPlace API is generally implemented in terms of self.encrypt_in_place_detached and self.decrypt_in_place_detached. If we want to change this so that the API is implemented in terms of methods consuming InOutBuf, then we need to make some surrounding changes to make that work.

Implementing in-place, tag-position-agnostic decryption in terms of a call to self.decrypt_inout_detached (or whatever we call it) would be very simple: the API could be revised to fn decrypt_in_place<'a>(&self, nonce: &Nonce<Self>, ad: &[u8], buffer: &'a mut [u8]) -> Result<&'a mut [u8], Error>, very similar to plain fn decrypt. The provided implementation would split the provided buffer, converting part of it into an InOutBuf<'_, '_, u8> and part of it into a &Tag<Self> in order to pass it to self.decrypt_inout_detached; it would then return the subslice containing just the decrypted plaintext to the caller (thereby retaining all of the convenience that current API consumers enjoy).
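For illustration, here is a rough sketch of what that provided implementation could look like for a prefix-tag ([tag | ciphertext]) layout; every name is a placeholder (including the exact decrypt_inout_detached signature), and it assumes the inout and typenum crates:

```rust
use aead::{AeadCore, Error, Nonce, Tag};
use inout::InOutBuf;
use typenum::Unsigned;

// Placeholder re-declaration of the InOutBuf-consuming detached method
// discussed above; none of these names are final.
pub trait AeadInOutDetached: AeadCore {
    fn decrypt_inout_detached(
        &self,
        nonce: &Nonce<Self>,
        associated_data: &[u8],
        buffer: InOutBuf<'_, '_, u8>,
        tag: &Tag<Self>,
    ) -> Result<(), Error>;
}

pub trait AeadDecryptInPlace: AeadInOutDetached {
    /// Provided implementation, sketched here for a prefix-tag layout
    /// ([tag | ciphertext]); a postfix layout would split at the other end.
    fn decrypt_in_place<'a>(
        &self,
        nonce: &Nonce<Self>,
        ad: &[u8],
        buffer: &'a mut [u8],
    ) -> Result<&'a mut [u8], Error> {
        let tag_len = Self::TagSize::to_usize();
        if buffer.len() < tag_len {
            return Err(Error);
        }
        // Split the tag off the front, then decrypt the remainder in place.
        let (tag, ciphertext) = buffer.split_at_mut(tag_len);
        let tag = Tag::<Self>::from_slice(tag);
        self.decrypt_inout_detached(nonce, ad, InOutBuf::from(&mut *ciphertext), tag)?;
        // Hand back just the decrypted plaintext region.
        Ok(ciphertext)
    }
}
```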
The trouble is, in order to implement its counterpart, encrypt_in_place, in terms of encrypt_inout_detached specifically for tag-prepending AEADs, something needs to give:

If we were to just convert a subslice of the buffer into an InOutBuf (as above), the caller would need to put their plaintext at an algorithm-dependent offset -- this would be VERY surprising and strange breaking behavior, so it's not an option.

If we want to sand down the above rough edge but keep the concept, InOutBuf would need to support an offset (so that .get_in() would return just the plaintext region, and .get_out() would return just the ciphertext region); a rough sketch of what that could look like follows below.

If we just throw our hands up and accept the status quo, then the aes-siv crate (and any implementation of AeadInPlace which prepends its authentication tag) needs to implement BOTH an in-place, authentication-tag-packing-aware encryption method (one which has to read plaintext from the start and write ciphertext at an offset) AND an in-out, authentication-tag-packing-unaware encryption method (one which simply reads plaintext from the in-buffer and writes ciphertext to the out-buffer at the same position).
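For concreteness, a very rough sketch of what the second option -- an offset-aware buffer -- might look like; this is a proposal illustration with placeholder names, not an existing inout API:

```rust
// A real implementation would also have to process data in an order that never
// overwrites not-yet-read input bytes; this only shows the region accessors.
pub struct OffsetInOutBuf<'a> {
    buf: &'a mut [u8],
    /// How far the output (ciphertext) region is shifted past the input
    /// (plaintext) region -- e.g. the tag length for a prepended tag.
    offset: usize,
}

impl<'a> OffsetInOutBuf<'a> {
    pub fn new(buf: &'a mut [u8], offset: usize) -> Option<Self> {
        (buf.len() >= offset).then_some(Self { buf, offset })
    }

    /// The plaintext region: everything except the trailing `offset` bytes.
    pub fn get_in(&self) -> &[u8] {
        &self.buf[..self.buf.len() - self.offset]
    }

    /// The ciphertext region: everything except the leading `offset` bytes.
    pub fn get_out(&mut self) -> &mut [u8] {
        &mut self.buf[self.offset..]
    }
}
```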
Honestly, I'm personally of the mind that the last option might be the most sensible one in terms of time spent, but I haven't yet dug deep into the aes-siv crate to gauge how much work would be required and whether I'm confident enough to implement the necessary changes optimally. That said, I think that adjusting InOutBuf to suit this use case might end up being the more flexible, elegant, and above all maintainable option.

What do you think? Am I overlooking something?
Simplifying wrappers
On a lesser note, as I was mentioning last night in the above PR, I'd very much like to see the specialist buffer types entirely abstracted away from callers in wrapper methods where possible.
As above, whatever we end up doing to encrypt_in_place, it would be ideal if we could also have an in-place encryption method that foregoes specialist types in exchange for slightly more complex semantics (needing to provide at least Algorithm::TagSize::to_usize() additional bytes of unused space).

Once we've resolved the conundrum in the section above, so long as adding a separate method is acceptable, it should be a fairly straightforward implementation.
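To illustrate, a rough sketch of such a method for a postfix-tag algorithm, expressed in terms of today's AeadInPlace; the function name, the msg_len parameter, and the exact shape are placeholders:

```rust
use aead::{AeadInPlace, Error, Nonce};
use typenum::Unsigned;

// Rough sketch only: a slice-based in-place encryption wrapper. The caller
// passes a buffer whose first `msg_len` bytes are plaintext and which has at
// least TagSize::to_usize() spare bytes after them.
fn encrypt_in_place_slice<'a, A: AeadInPlace>(
    aead: &A,
    nonce: &Nonce<A>,
    associated_data: &[u8],
    buffer: &'a mut [u8],
    msg_len: usize,
) -> Result<&'a mut [u8], Error> {
    let tag_len = A::TagSize::to_usize();
    let total = msg_len.checked_add(tag_len).ok_or(Error)?;
    if buffer.len() < total {
        return Err(Error);
    }
    // Encrypt the message region in place, then copy the tag into the spare
    // bytes immediately after it.
    let (msg, rest) = buffer.split_at_mut(msg_len);
    let tag = aead.encrypt_in_place_detached(nonce, associated_data, msg)?;
    rest[..tag_len].copy_from_slice(tag.as_slice());
    // Hand back the [ciphertext | tag] message.
    Ok(&mut buffer[..total])
}
```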