
Fix docs about uninitialized bytes #5

Merged
merged 1 commit into main from doc on Feb 26, 2022
Conversation

taiki-e (Owner) commented on Feb 26, 2022

Based on the feedback from @RalfJung.

taiki-e mentioned this pull request on Feb 26, 2022
@@ -18,7 +18,7 @@ See [P1478R1][p1478r1] for more.
 - If the alignment of the type being copied is the same as the pointer width, `atomic_load` can produce assembly roughly equivalent to using a volatile read plus an atomic fence on many platforms (e.g., [aarch64](https://github.com/taiki-e/atomic-memcpy/blob/HEAD/tests/asm-test/asm/aarch64-unknown-linux-gnu/atomic_memcpy_load_align8), [riscv64](https://github.com/taiki-e/atomic-memcpy/blob/main/tests/asm-test/asm/riscv64gc-unknown-linux-gnu/atomic_memcpy_load_align8); see the [`tests/asm-test/asm`][asm-test] directory for more).
 - If the alignment of the type being copied is smaller than the pointer width, there will be some performance degradation. However, the implementation avoids extreme degradation, at least on x86_64. (See [the implementation comments of `atomic_load`][implementation] for more.) There may still be room for improvement, especially on non-x86_64 platforms.
 - Optimization for the case where the alignment of the type being copied is larger than the pointer width has not yet been fully investigated. There may still be room for improvement, especially on 32-bit platforms where `AtomicU64` is available.
-- If the type being copied contains uninitialized bytes (e.g., padding), it is incompatible with `-Zmiri-check-number-validity`. This will probably not be resolved until something like `AtomicMaybeUninit` is supported. **Note:** Because [Miri does not track uninitialized bytes on a per-byte basis for partially initialized scalars][rust-lang/rust#69488], Miri may report this case as an access to an uninitialized byte, regardless of whether the uninitialized byte is actually accessed.
+- If the type being copied contains uninitialized bytes (e.g., padding), [it is undefined behavior because the copy goes through integers][undefined-behavior]. This problem will probably not be resolved until something like `AtomicMaybeUninit` is supported.
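For illustration, a minimal Rust sketch (not part of this PR; the `Padded` type below is hypothetical) of the problem the new line describes: a byte-wise copy that goes through plain integer loads reads a padded struct's uninitialized padding bytes as integer data, which is undefined behavior and is what Miri reports.

```rust
use core::mem::MaybeUninit;

// Hypothetical example type: `a` is followed by 7 padding bytes on typical
// 64-bit targets, so the first 8-byte word of `Padded` is only partly
// initialized.
#[repr(C)]
#[derive(Clone, Copy)]
struct Padded {
    a: u8,
    b: u64,
}

fn main() {
    let src = Padded { a: 1, b: 2 };

    // What an integer-based byte-wise copy effectively does for the first
    // word: read `a` plus the padding as one `u64`. Producing an integer
    // from uninitialized bytes is undefined behavior, and Miri reports it.
    let _word = unsafe { (&src as *const Padded).cast::<u64>().read() };

    // Reading the same bytes as `MaybeUninit<u64>` is allowed, which is why
    // something like `AtomicMaybeUninit` would resolve the problem.
    let _ok = unsafe { (&src as *const Padded).cast::<MaybeUninit<u64>>().read() };
}
```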
taiki-e (Owner, Author) commented on Feb 26, 2022


I suppose the currently available (sound) workaround is to use inline assembly (#6), but that's hard to write/maintain, and not compatible with Miri (and sanitizers).
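For context, a rough sketch of what such an inline-assembly workaround can look like (hypothetical, x86_64-only, not the code referenced in #6): both the load and the store happen inside the `asm!` block, so the possibly uninitialized bytes never appear as a Rust integer value; that opacity is also why Miri and the sanitizers cannot check it.

```rust
use core::arch::asm;

/// Copies 8 bytes from `src` to `dst` with a single atomic load
/// (hypothetical sketch; a real implementation needs per-size and
/// per-architecture variants plus memory-ordering handling).
///
/// # Safety
/// `src` and `dst` must be valid for 8-byte access and 8-byte aligned.
#[cfg(target_arch = "x86_64")]
unsafe fn atomic_load_store_8(src: *const u8, dst: *mut u8) {
    // An aligned 8-byte `mov` is a single atomic load on x86_64. The value
    // only lives in a register inside the asm block, so the possibly
    // uninitialized bytes are never materialized as a Rust integer.
    asm!(
        "mov {tmp}, qword ptr [{src}]",
        "mov qword ptr [{dst}], {tmp}",
        src = in(reg) src,
        dst = in(reg) dst,
        tmp = out(reg) _,
        options(nostack, preserves_flags),
    );
}

#[cfg(target_arch = "x86_64")]
fn main() {
    let src: u64 = 42;
    let mut dst: u64 = 0;
    // SAFETY: both locations are valid, 8-byte aligned, and not shared.
    unsafe { atomic_load_store_8(&src as *const u64 as *const u8, &mut dst as *mut u64 as *mut u8) };
    assert_eq!(dst, 42);
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```

Maintaining something like this per size and per architecture is exactly what makes the inline-assembly route hard to write and maintain.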


Yeah...

The new text LGTM!

taiki-e (Owner, Author) commented on Feb 26, 2022

bors r+

bors bot (Contributor) commented on Feb 26, 2022

Build succeeded:

bors bot merged commit ee987b5 into main on Feb 26, 2022
bors bot deleted the doc branch on February 26, 2022 at 09:58