
Fix docs about uninitialized bytes #5

Merged: 1 commit, Feb 26, 2022
4 changes: 2 additions & 2 deletions README.md
@@ -18,7 +18,7 @@ See [P1478R1][p1478r1] for more.
- If the alignment of the type being copied is the same as the pointer width, `atomic_load` can produce assembly roughly equivalent to a volatile read plus an atomic fence on many platforms (e.g., [aarch64](https://github.com/taiki-e/atomic-memcpy/blob/HEAD/tests/asm-test/asm/aarch64-unknown-linux-gnu/atomic_memcpy_load_align8), [riscv64](https://github.com/taiki-e/atomic-memcpy/blob/main/tests/asm-test/asm/riscv64gc-unknown-linux-gnu/atomic_memcpy_load_align8); see the [`tests/asm-test/asm`][asm-test] directory for more).
- If the alignment of the type being copied is smaller than the pointer width, there is some performance degradation, but the implementation avoids extreme degradation at least on x86_64. (See [the implementation comments of `atomic_load`][implementation] for more.) There may still be room for improvement, especially on non-x86_64 platforms.
- Optimization for the case where the alignment of the type being copied is larger than the pointer width has not yet been fully investigated. There may still be room for improvement, especially on 32-bit platforms where `AtomicU64` is available.
-- If the type being copied contains uninitialized bytes (e.g., padding), it is incompatible with `-Zmiri-check-number-validity`. This will probably not be resolved until something like `AtomicMaybeUninit` is supported. **Note:** Due to [Miri does not track uninitialized bytes on a per byte basis for partially initialized scalars][rust-lang/rust#69488], Miri may report this case as an access to an uninitialized byte, regardless of whether the uninitialized byte is actually accessed or not.
+- If the type being copied contains uninitialized bytes (e.g., padding) [it is undefined behavior because the copy goes through integers][undefined-behavior]. This problem will probably not be resolved until something like `AtomicMaybeUninit` is supported.
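To illustrate the new wording, here is a minimal sketch (not part of this PR). It assumes `atomic_load` returns `core::mem::MaybeUninit<T>`, as the (truncated) signature further down in this diff suggests, and contrasts a padding-free type, which is safe to copy this way, with a type whose layout contains uninitialized padding bytes:

```rust
use core::cell::UnsafeCell;
use core::sync::atomic::Ordering;

// No padding: 4 + 2 + 2 bytes with 4-byte alignment, so every byte of the
// value is initialized and the integer-based copy never reads uninitialized
// memory.
#[derive(Clone, Copy, Debug, PartialEq)]
#[repr(C)]
struct NoPadding {
    a: u32,
    b: u16,
    c: u16,
}

// Two padding bytes sit between `a` and `b`; they are uninitialized, which is
// exactly the undefined-behavior case the new text describes.
#[allow(dead_code)]
#[repr(C)]
struct HasPadding {
    a: u16,
    b: u32,
}

fn main() {
    assert_eq!(core::mem::size_of::<HasPadding>(), 8); // 2 of these 8 bytes are padding

    let cell = UnsafeCell::new(NoPadding { a: 1, b: 2, c: 3 });
    // SAFETY: the pointer comes from `UnsafeCell::get`, is valid and aligned,
    // `NoPadding` contains no uninitialized bytes, and there are no concurrent
    // writes in this single-threaded example.
    let v = unsafe { atomic_memcpy::atomic_load(cell.get(), Ordering::Acquire) };
    // SAFETY: the load copied a fully initialized `NoPadding`.
    assert_eq!(unsafe { v.assume_init() }, NoPadding { a: 1, b: 2, c: 3 });
}
```

A `#[repr(C)]` layout with fields ordered to avoid padding, or explicit zero-initialized padding fields, is one way to satisfy the new requirement.
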
@taiki-e (owner, author) commented on Feb 26, 2022:

I suppose the currently available (sound) workaround is to use inline assembly (#6), but that's hard to write/maintain, and not compatible with Miri (and sanitizers).


Yeah...

The new text LGTM!
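
For context on the workaround mentioned in the comment above, a rough sketch follows. It is not this crate's implementation, and `asm_load_u32` is a hypothetical helper; it only shows the shape of the inline-assembly approach on x86_64, where an aligned `mov` load is a single atomic access and the bytes never exist as a Rust integer value:

```rust
use core::arch::asm;
use core::mem::MaybeUninit;

/// Hypothetical helper: a relaxed 4-byte atomic load written in inline
/// assembly, returning `MaybeUninit` so the compiler never sees the loaded
/// bytes as a typed integer value.
///
/// # Safety
/// `src` must be valid for reads, 4-byte aligned, and there must be no
/// concurrent non-atomic writes to it.
#[cfg(target_arch = "x86_64")]
unsafe fn asm_load_u32(src: *const u32) -> MaybeUninit<u32> {
    let mut dst = MaybeUninit::<u32>::uninit();
    asm!(
        // Copy 4 bytes from `*src` into `dst` through a scratch register,
        // without the value ever materializing as a Rust integer.
        "mov {tmp:e}, dword ptr [{src}]",
        "mov dword ptr [{dst}], {tmp:e}",
        src = in(reg) src,
        dst = in(reg) dst.as_mut_ptr(),
        tmp = out(reg) _,
        options(nostack, preserves_flags),
    );
    dst
}
```

As the comment notes, this kind of code has to be written and maintained per architecture, and Miri and sanitizers cannot look inside the asm block.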


## Related Projects

@@ -28,7 +28,7 @@ See [P1478R1][p1478r1] for more.
[implementation]: https://github.com/taiki-e/atomic-memcpy/blob/570de7be73b3cb086741cc6cff80dea4c706349c/src/lib.rs#L339-L383
[p1478r1]: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1478r1.html
[portable-atomic]: https://github.com/taiki-e/portable-atomic
-[rust-lang/rust#69488]: https://github.com/rust-lang/rust/issues/69488
+[undefined-behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html

## License

4 changes: 4 additions & 0 deletions src/lib.rs
@@ -59,6 +59,7 @@ use core::sync::atomic::{self, Ordering};
/// - `src` must be valid for reads.
/// - `src` must be properly aligned.
/// - `src` must go through [`UnsafeCell::get`](core::cell::UnsafeCell::get).
+/// - `T` must not contain uninitialized bytes.
/// - There are no concurrent non-atomic write operations.
/// - There are no concurrent atomic write operations of different
/// granularity. The granularity of atomic operations is an implementation
@@ -126,6 +127,7 @@ pub unsafe fn atomic_load<T>(src: *const T, order: Ordering) -> core::mem::Maybe
/// - `dst` must be [valid] for writes.
/// - `dst` must be properly aligned.
/// - `dst` must go through [`UnsafeCell::get`](core::cell::UnsafeCell::get).
+/// - `T` must not contain uninitialized bytes.
/// - There are no concurrent non-atomic operations.
/// - There are no concurrent atomic operations of different
/// granularity. The granularity of atomic operations is an implementation
@@ -389,6 +391,7 @@ mod imp {
// - `src` is valid for atomic reads.
// - `src` is properly aligned for `T`.
// - `src` go through `UnsafeCell::get`.
+// - `T` does not contain uninitialized bytes.
// - there are no concurrent non-atomic write operations.
// - there are no concurrent atomic write operations of different granularity.
// Note that the safety of the code in this function relies on these guarantees,
@@ -627,6 +630,7 @@ mod imp {
// - `dst` is valid for atomic writes.
// - `dst` is properly aligned for `T`.
// - `dst` go through `UnsafeCell::get`.
+// - `T` does not contain uninitialized bytes.
// - there are no concurrent non-atomic operations.
// - there are no concurrent atomic operations of different granularity.
// - if there are concurrent atomic write operations, `T` is valid for all bit patterns.
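Taken together, the requirements amended in this diff amount to a usage pattern like the following sketch. It is not taken from the PR; it assumes the store counterpart has the signature `atomic_store(dst: *mut T, val: T, order: Ordering)`, mirroring the `atomic_load` signature shown above. The shared value lives behind an `UnsafeCell`, every pointer comes from `UnsafeCell::get`, and `T` (here `u64`) has no uninitialized bytes and is valid for all bit patterns:

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::Ordering;
use std::sync::Arc;
use std::thread;

// `u64` has no padding, satisfying the new "no uninitialized bytes" requirement.
struct Shared(UnsafeCell<u64>);

// SAFETY: all cross-thread access to the inner value goes through the crate's
// atomic load/store functions below.
unsafe impl Sync for Shared {}

fn main() {
    let shared = Arc::new(Shared(UnsafeCell::new(0u64)));

    let writer = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || {
            // SAFETY: `dst` comes from `UnsafeCell::get`, is valid and aligned,
            // `u64` has no uninitialized bytes, and the only concurrent access
            // is the atomic load below (same functions, same granularity).
            // NOTE: the `atomic_store(dst, val, order)` signature is assumed.
            unsafe { atomic_memcpy::atomic_store(shared.0.get(), 42u64, Ordering::Release) };
        })
    };

    // SAFETY: same conditions as above, for reads.
    let v = unsafe { atomic_memcpy::atomic_load(shared.0.get(), Ordering::Acquire) };
    // SAFETY: every byte of the location is always initialized, so the copy is
    // a valid `u64`. On targets where the copy is not a single atomic operation
    // it may be a torn mix of old and new bytes, which is why the docs require
    // `T` to be valid for all bit patterns under concurrent writes.
    let v = unsafe { v.assume_init() };
    println!("observed {v}");

    writer.join().unwrap();
}
```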
66 changes: 0 additions & 66 deletions tests/padding.rs

This file was deleted.

71 changes: 0 additions & 71 deletions tests/uninit.rs

This file was deleted.