
Commit 0919f4a

Auto merge of #27414 - Gankro:tarpl-fixes, r=alexcrichton

This is *mostly* reducing *my* use of *italics*, but there are some other misc changes interspersed as I went along. This updates the italicizing alphabetically from `a` to `ra`. r? @steveklabnik

2 parents: 7597262 + 554efc0

35 files changed (+237 −229 lines)

src/doc/tarpl/README.md (+19 −24)

@@ -2,38 +2,33 @@

 # NOTE: This is a draft document, and may contain serious errors

-So you've played around with Rust a bit. You've written a few simple programs and
-you think you grok the basics. Maybe you've even read through
-*[The Rust Programming Language][trpl]*. Now you want to get neck-deep in all the
+So you've played around with Rust a bit. You've written a few simple programs
+and you think you grok the basics. Maybe you've even read through *[The Rust
+Programming Language][trpl]* (TRPL). Now you want to get neck-deep in all the
 nitty-gritty details of the language. You want to know those weird corner-cases.
-You want to know what the heck `unsafe` really means, and how to properly use it.
-This is the book for you.
+You want to know what the heck `unsafe` really means, and how to properly use
+it. This is the book for you.

-To be clear, this book goes into *serious* detail. We're going to dig into
+To be clear, this book goes into serious detail. We're going to dig into
 exception-safety and pointer aliasing. We're going to talk about memory
 models. We're even going to do some type-theory. This is stuff that you
-absolutely *don't* need to know to write fast and safe Rust programs.
+absolutely don't need to know to write fast and safe Rust programs.
 You could probably close this book *right now* and still have a productive
 and happy career in Rust.

-However if you intend to write unsafe code -- or just *really* want to dig into
-the guts of the language -- this book contains *invaluable* information.
+However if you intend to write unsafe code -- or just really want to dig into
-the guts of the language -- this book contains invaluable information.

-Unlike *The Rust Programming Language* we *will* be assuming considerable prior
-knowledge. In particular, you should be comfortable with:
+Unlike TRPL we will be assuming considerable prior knowledge. In particular, you
+should be comfortable with basic systems programming and basic Rust. If you
+don't feel comfortable with these topics, you should consider [reading
+TRPL][trpl], though we will not be assuming that you have. You can skip
+straight to this book if you want; just know that we won't be explaining
+everything from the ground up.

-* Basic Systems Programming:
-    * Pointers
-    * [The stack and heap][]
-    * The memory hierarchy (caches)
-    * Threads
-
-* [Basic Rust][]
-
-Due to the nature of advanced Rust programming, we will be spending a lot of time
-talking about *safety* and *guarantees*. In particular, a significant portion of
-the book will be dedicated to correctly writing and understanding Unsafe Rust.
+Due to the nature of advanced Rust programming, we will be spending a lot of
+time talking about *safety* and *guarantees*. In particular, a significant
+portion of the book will be dedicated to correctly writing and understanding
+Unsafe Rust.

 [trpl]: ../book/
-[The stack and heap]: ../book/the-stack-and-the-heap.html
-[Basic Rust]: ../book/syntax-and-semantics.html

src/doc/tarpl/SUMMARY.md (+1 −1)

@@ -10,7 +10,7 @@
 * [Ownership](ownership.md)
 * [References](references.md)
 * [Lifetimes](lifetimes.md)
-* [Limits of lifetimes](lifetime-mismatch.md)
+* [Limits of Lifetimes](lifetime-mismatch.md)
 * [Lifetime Elision](lifetime-elision.md)
 * [Unbounded Lifetimes](unbounded-lifetimes.md)
 * [Higher-Rank Trait Bounds](hrtb.md)

src/doc/tarpl/atomics.md (+33 −28)

@@ -17,7 +17,7 @@ face.
 The C11 memory model is fundamentally about trying to bridge the gap between the
 semantics we want, the optimizations compilers want, and the inconsistent chaos
 our hardware wants. *We* would like to just write programs and have them do
-exactly what we said but, you know, *fast*. Wouldn't that be great?
+exactly what we said but, you know, fast. Wouldn't that be great?



@@ -35,20 +35,20 @@ y = 3;
 x = 2;
 ```

-The compiler may conclude that it would *really* be best if your program did
+The compiler may conclude that it would be best if your program did

 ```rust,ignore
 x = 2;
 y = 3;
 ```

-This has inverted the order of events *and* completely eliminated one event.
+This has inverted the order of events and completely eliminated one event.
 From a single-threaded perspective this is completely unobservable: after all
 the statements have executed we are in exactly the same state. But if our
-program is multi-threaded, we may have been relying on `x` to *actually* be
-assigned to 1 before `y` was assigned. We would *really* like the compiler to be
+program is multi-threaded, we may have been relying on `x` to actually be
+assigned to 1 before `y` was assigned. We would like the compiler to be
 able to make these kinds of optimizations, because they can seriously improve
-performance. On the other hand, we'd really like to be able to depend on our
+performance. On the other hand, we'd also like to be able to depend on our
 program *doing the thing we said*.


@@ -57,15 +57,15 @@ program *doing the thing we said*.
 # Hardware Reordering

 On the other hand, even if the compiler totally understood what we wanted and
-respected our wishes, our *hardware* might instead get us in trouble. Trouble
+respected our wishes, our hardware might instead get us in trouble. Trouble
 comes from CPUs in the form of memory hierarchies. There is indeed a global
 shared memory space somewhere in your hardware, but from the perspective of each
 CPU core it is *so very far away* and *so very slow*. Each CPU would rather work
-with its local cache of the data and only go through all the *anguish* of
-talking to shared memory *only* when it doesn't actually have that memory in
+with its local cache of the data and only go through all the anguish of
+talking to shared memory only when it doesn't actually have that memory in
 cache.

-After all, that's the whole *point* of the cache, right? If every read from the
+After all, that's the whole point of the cache, right? If every read from the
 cache had to run back to shared memory to double check that it hadn't changed,
 what would the point be? The end result is that the hardware doesn't guarantee
 that events that occur in the same order on *one* thread, occur in the same
@@ -99,13 +99,13 @@ provides weak ordering guarantees. This has two consequences for concurrent
 programming:

 * Asking for stronger guarantees on strongly-ordered hardware may be cheap or
-  even *free* because they already provide strong guarantees unconditionally.
+  even free because they already provide strong guarantees unconditionally.
   Weaker guarantees may only yield performance wins on weakly-ordered hardware.

-* Asking for guarantees that are *too* weak on strongly-ordered hardware is
+* Asking for guarantees that are too weak on strongly-ordered hardware is
   more likely to *happen* to work, even though your program is strictly
-  incorrect. If possible, concurrent algorithms should be tested on weakly-
-  ordered hardware.
+  incorrect. If possible, concurrent algorithms should be tested on
+  weakly-ordered hardware.



@@ -115,10 +115,10 @@ programming:

 The C11 memory model attempts to bridge the gap by allowing us to talk about the
 *causality* of our program. Generally, this is by establishing a *happens
-before* relationships between parts of the program and the threads that are
+before* relationship between parts of the program and the threads that are
 running them. This gives the hardware and compiler room to optimize the program
 more aggressively where a strict happens-before relationship isn't established,
-but forces them to be more careful where one *is* established. The way we
+but forces them to be more careful where one is established. The way we
 communicate these relationships are through *data accesses* and *atomic
 accesses*.

@@ -130,8 +130,10 @@ propagate the changes made in data accesses to other threads as lazily and
 inconsistently as it wants. Mostly critically, data accesses are how data races
 happen. Data accesses are very friendly to the hardware and compiler, but as
 we've seen they offer *awful* semantics to try to write synchronized code with.
-Actually, that's too weak. *It is literally impossible to write correct
-synchronized code using only data accesses*.
+Actually, that's too weak.
+
+**It is literally impossible to write correct synchronized code using only data
+accesses.**

 Atomic accesses are how we tell the hardware and compiler that our program is
 multi-threaded. Each atomic access can be marked with an *ordering* that
@@ -141,7 +143,10 @@ they *can't* do. For the compiler, this largely revolves around re-ordering of
 instructions. For the hardware, this largely revolves around how writes are
 propagated to other threads. The set of orderings Rust exposes are:

-* Sequentially Consistent (SeqCst) Release Acquire Relaxed
+* Sequentially Consistent (SeqCst)
+* Release
+* Acquire
+* Relaxed

 (Note: We explicitly do not expose the C11 *consume* ordering)

@@ -154,13 +159,13 @@ synchronize"

 Sequentially Consistent is the most powerful of all, implying the restrictions
 of all other orderings. Intuitively, a sequentially consistent operation
-*cannot* be reordered: all accesses on one thread that happen before and after a
-SeqCst access *stay* before and after it. A data-race-free program that uses
+cannot be reordered: all accesses on one thread that happen before and after a
+SeqCst access stay before and after it. A data-race-free program that uses
 only sequentially consistent atomics and data accesses has the very nice
 property that there is a single global execution of the program's instructions
 that all threads agree on. This execution is also particularly nice to reason
 about: it's just an interleaving of each thread's individual executions. This
-*does not* hold if you start using the weaker atomic orderings.
+does not hold if you start using the weaker atomic orderings.

 The relative developer-friendliness of sequential consistency doesn't come for
 free. Even on strongly-ordered platforms sequential consistency involves
@@ -170,8 +175,8 @@ In practice, sequential consistency is rarely necessary for program correctness.
 However sequential consistency is definitely the right choice if you're not
 confident about the other memory orders. Having your program run a bit slower
 than it needs to is certainly better than it running incorrectly! It's also
-*mechanically* trivial to downgrade atomic operations to have a weaker
-consistency later on. Just change `SeqCst` to e.g. `Relaxed` and you're done! Of
+mechanically trivial to downgrade atomic operations to have a weaker
+consistency later on. Just change `SeqCst` to `Relaxed` and you're done! Of
 course, proving that this transformation is *correct* is a whole other matter.


@@ -183,15 +188,15 @@ Acquire and Release are largely intended to be paired. Their names hint at their
 use case: they're perfectly suited for acquiring and releasing locks, and
 ensuring that critical sections don't overlap.

-Intuitively, an acquire access ensures that every access after it *stays* after
+Intuitively, an acquire access ensures that every access after it stays after
 it. However operations that occur before an acquire are free to be reordered to
 occur after it. Similarly, a release access ensures that every access before it
-*stays* before it. However operations that occur after a release are free to be
+stays before it. However operations that occur after a release are free to be
 reordered to occur before it.

 When thread A releases a location in memory and then thread B subsequently
 acquires *the same* location in memory, causality is established. Every write
-that happened *before* A's release will be observed by B *after* its release.
+that happened before A's release will be observed by B after its release.
 However no causality is established with any other threads. Similarly, no
 causality is established if A and B access *different* locations in memory.

@@ -230,7 +235,7 @@ weakly-ordered platforms.
 # Relaxed

 Relaxed accesses are the absolute weakest. They can be freely re-ordered and
-provide no happens-before relationship. Still, relaxed operations *are* still
+provide no happens-before relationship. Still, relaxed operations are still
 atomic. That is, they don't count as data accesses and any read-modify-write
 operations done to them occur atomically. Relaxed operations are appropriate for
 things that you definitely want to happen, but don't particularly otherwise care
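As an aside (not part of this commit), the release/acquire pairing described in the hunks above can be sketched with `std::sync::atomic`. The `publish_and_read` helper here is invented for illustration; it shows the classic "publish" pattern where a `Release` store pairs with an `Acquire` load:

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// Producer publishes `data` via a Release store to `ready`; the
// consumer's Acquire load pairs with it, establishing happens-before.
fn publish_and_read() -> usize {
    let data = Arc::new(AtomicUsize::new(0));
    let ready = Arc::new(AtomicBool::new(false));

    let (d, r) = (data.clone(), ready.clone());
    let producer = thread::spawn(move || {
        d.store(42, Ordering::Relaxed);   // this write...
        r.store(true, Ordering::Release); // ...stays before this "publish"
    });

    // Once the Acquire load observes `true`, it must also observe the 42.
    while !ready.load(Ordering::Acquire) {}
    let seen = data.load(Ordering::Relaxed);

    producer.join().unwrap();
    seen
}

fn main() {
    assert_eq!(publish_and_read(), 42);
}
```

Swapping both orderings to `SeqCst` would also be correct (just potentially slower), matching the book's advice that `SeqCst` is the safe default.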

src/doc/tarpl/borrow-splitting.md (+5 −5)

@@ -2,7 +2,7 @@

 The mutual exclusion property of mutable references can be very limiting when
 working with a composite structure. The borrow checker understands some basic
-stuff, but will fall over pretty easily. It *does* understand structs
+stuff, but will fall over pretty easily. It does understand structs
 sufficiently to know that it's possible to borrow disjoint fields of a struct
 simultaneously. So this works today:

@@ -50,7 +50,7 @@ to the same value.

 In order to "teach" borrowck that what we're doing is ok, we need to drop down
 to unsafe code. For instance, mutable slices expose a `split_at_mut` function
-that consumes the slice and returns *two* mutable slices. One for everything to
+that consumes the slice and returns two mutable slices. One for everything to
 the left of the index, and one for everything to the right. Intuitively we know
 this is safe because the slices don't overlap, and therefore alias. However
 the implementation requires some unsafety:
@@ -93,10 +93,10 @@ completely incompatible with this API, as it would produce multiple mutable
 references to the same object!

 However it actually *does* work, exactly because iterators are one-shot objects.
-Everything an IterMut yields will be yielded *at most* once, so we don't
-*actually* ever yield multiple mutable references to the same piece of data.
+Everything an IterMut yields will be yielded at most once, so we don't
+actually ever yield multiple mutable references to the same piece of data.

-Perhaps surprisingly, mutable iterators *don't* require unsafe code to be
+Perhaps surprisingly, mutable iterators don't require unsafe code to be
 implemented for many types!

 For instance here's a singly linked list:
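(The linked-list listing referenced by the last context line falls outside the hunk.) As a hedged aside, not part of the commit, a minimal use of `split_at_mut` looks like this; the `bump_halves` helper is invented for illustration:

```rust
// Mutate both halves of a slice at once. Borrowck accepts the two
// mutable borrows because one unsafe-backed call (`split_at_mut`)
// hands back provably non-overlapping slices.
fn bump_halves(v: &mut [i32]) {
    let (left, right) = v.split_at_mut(2);
    left[0] += 10;  // element 0, in the left half
    right[0] += 10; // element 2, in the right half
}

fn main() {
    let mut v = [1, 2, 3, 4, 5];
    bump_halves(&mut v);
    assert_eq!(v, [11, 2, 13, 4, 5]);
}
```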

src/doc/tarpl/casts.md (+2 −2)

@@ -1,13 +1,13 @@
 % Casts

 Casts are a superset of coercions: every coercion can be explicitly
-invoked via a cast. However some conversions *require* a cast.
+invoked via a cast. However some conversions require a cast.
 While coercions are pervasive and largely harmless, these "true casts"
 are rare and potentially dangerous. As such, casts must be explicitly invoked
 using the `as` keyword: `expr as Type`.

 True casts generally revolve around raw pointers and the primitive numeric
-types. Even though they're dangerous, these casts are *infallible* at runtime.
+types. Even though they're dangerous, these casts are infallible at runtime.
 If a cast triggers some subtle corner case no indication will be given that
 this occurred. The cast will simply succeed. That said, casts must be valid
 at the type level, or else they will be prevented statically. For instance,
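The hunk ends mid-example. As an illustrative aside (not from the diff), here is a numeric cast that silently truncates, plus a pointer cast that is valid at the type level; the helper function is invented for illustration:

```rust
// Truncating cast: `as` keeps only the low 8 bits, with no
// runtime indication that the value didn't fit.
fn truncate_to_u8(x: u32) -> u8 {
    x as u8
}

fn main() {
    // 300 doesn't fit in a u8; the cast "simply succeeds" as 300 mod 256.
    assert_eq!(truncate_to_u8(300), 44);

    // Raw-pointer casts are allowed because they're valid at the type
    // level, even though dereferencing the result could be dangerous.
    let n: i32 = 7;
    let addr = &n as *const i32 as usize;
    assert!(addr != 0);
}
```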

src/doc/tarpl/checked-uninit.md (+1 −1)

@@ -80,7 +80,7 @@ loop {
     // because it relies on actual values.
     if true {
         // But it does understand that it will only be taken once because
-        // we *do* unconditionally break out of it. Therefore `x` doesn't
+        // we unconditionally break out of it. Therefore `x` doesn't
         // need to be marked as mutable.
         x = 0;
         break;

src/doc/tarpl/concurrency.md (+3 −3)

@@ -2,12 +2,12 @@

 Rust as a language doesn't *really* have an opinion on how to do concurrency or
 parallelism. The standard library exposes OS threads and blocking sys-calls
-because *everyone* has those, and they're uniform enough that you can provide
+because everyone has those, and they're uniform enough that you can provide
 an abstraction over them in a relatively uncontroversial way. Message passing,
 green threads, and async APIs are all diverse enough that any abstraction over
 them tends to involve trade-offs that we weren't willing to commit to for 1.0.

 However the way Rust models concurrency makes it relatively easy design your own
-concurrency paradigm as a library and have *everyone else's* code Just Work
+concurrency paradigm as a library and have everyone else's code Just Work
 with yours. Just require the right lifetimes and Send and Sync where appropriate
-and you're off to the races. Or rather, off to the... not... having... races.
+and you're off to the races. Or rather, off to the... not... having... races.

src/doc/tarpl/constructors.md (+3 −3)

@@ -37,14 +37,14 @@ blindly memcopied to somewhere else in memory. This means pure on-the-stack-but-
 still-movable intrusive linked lists are simply not happening in Rust (safely).

 Assignment and copy constructors similarly don't exist because move semantics
-are the *only* semantics in Rust. At most `x = y` just moves the bits of y into
-the x variable. Rust *does* provide two facilities for providing C++'s copy-
+are the only semantics in Rust. At most `x = y` just moves the bits of y into
+the x variable. Rust does provide two facilities for providing C++'s copy-
 oriented semantics: `Copy` and `Clone`. Clone is our moral equivalent of a copy
 constructor, but it's never implicitly invoked. You have to explicitly call
 `clone` on an element you want to be cloned. Copy is a special case of Clone
 where the implementation is just "copy the bits". Copy types *are* implicitly
 cloned whenever they're moved, but because of the definition of Copy this just
-means *not* treating the old copy as uninitialized -- a no-op.
+means not treating the old copy as uninitialized -- a no-op.

 While Rust provides a `Default` trait for specifying the moral equivalent of a
 default constructor, it's incredibly rare for this trait to be used. This is
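As a sketch (not part of the commit) of the Copy/Clone distinction this hunk describes, with hypothetical `Point` and `Label` types:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct Point { x: i32, y: i32 } // plain bits, so Copy is fine

#[derive(Clone, PartialEq, Debug)]
struct Label(String); // owns heap data, so Clone only

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p;          // implicit bitwise copy: `p` stays usable
    assert_eq!(p, q);

    let a = Label(String::from("hi"));
    let b = a.clone();  // Clone is never implicit; call it yourself
    assert_eq!(a, b);   // `a` is still valid because we cloned, not moved
}
```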

src/doc/tarpl/conversions.md (+1 −1)

@@ -8,7 +8,7 @@ a different type. Because Rust encourages encoding important properties in the
 type system, these problems are incredibly pervasive. As such, Rust
 consequently gives you several ways to solve them.

-First we'll look at the ways that *Safe Rust* gives you to reinterpret values.
+First we'll look at the ways that Safe Rust gives you to reinterpret values.
 The most trivial way to do this is to just destructure a value into its
 constituent parts and then build a new type out of them. e.g.

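The example promised by the trailing "e.g." falls outside the hunk. A minimal destructure-and-rebuild sketch, with invented `Foo`/`Bar` types, might look like:

```rust
#[derive(Debug, PartialEq)]
struct Foo { x: u32, y: u16 }

#[derive(Debug, PartialEq)]
struct Bar { a: u32, b: u16 }

// Reinterpret a Foo as a Bar by taking it apart and rebuilding it,
// entirely in Safe Rust.
fn reinterpret(foo: Foo) -> Bar {
    let Foo { x, y } = foo;
    Bar { a: x, b: y }
}

fn main() {
    let bar = reinterpret(Foo { x: 1, y: 2 });
    assert_eq!(bar, Bar { a: 1, b: 2 });
}
```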
src/doc/tarpl/data.md (+3 −3)

@@ -1,5 +1,5 @@
 % Data Representation in Rust

-Low-level programming cares a lot about data layout. It's a big deal. It also pervasively
-influences the rest of the language, so we're going to start by digging into how data is
-represented in Rust.
+Low-level programming cares a lot about data layout. It's a big deal. It also
+pervasively influences the rest of the language, so we're going to start by
+digging into how data is represented in Rust.
