//!
//! ## The need for synchronization
//!
//! Conceptually, a Rust program is a series of operations which will
//! be executed on a computer. The timeline of events happening in the
//! program is consistent with the order of the operations in the code.
//!
//! Consider the following code, operating on some global static variables:
//!
//! ```rust
//! static mut A: u32 = 0;
//! // ...
//! }
//! ```
//!
//! It appears as if some variables stored in memory are changed, an
//! addition is performed, the result is stored in `A`, and the variable
//! `C` is modified twice.
//!
//! When only a single thread is involved, the results are as expected:
//! the line `7 4 4` gets printed.
//!
//! ...
//! in a temporary location until it gets printed, with the global variable
//! never getting updated.
//!
//! - The final result could be determined just by looking at the code
//!   at compile time, so [constant folding] might turn the whole
//!   block into a simple `println!("7 4 4")`.
//!
//! The compiler is allowed to perform any combination of these
//! optimizations, as long as the final optimized code, when executed,
//! produces the same results as the one without optimizations.
//!
//! Due to the [concurrency] involved in modern computers, assumptions
//! about the program's execution order are often wrong. Access to
//! global variables can lead to nondeterministic results, **even if**
//! compiler optimizations are disabled, and it is **still possible**
//! to introduce synchronization bugs.
//!
//! Note that thanks to Rust's safety guarantees, accessing global (static)
//! variables requires `unsafe` code, assuming we don't use any of the
//! ...
//! Instructions can execute in a different order from the one we define, due to
//! various reasons:
//!
//! - The **compiler** reordering instructions: If the compiler can issue an
//!   instruction at an earlier point, it will try to do so. For example, it
//!   might hoist memory loads at the top of a code block, so that the CPU can
//!   start [prefetching] the values from memory.
//!   ...
//!   signal handlers or certain kinds of low-level code.
//!   Use [compiler fences] to prevent this reordering.
//!
//! - A **single processor** executing instructions [out-of-order]:
//!   Modern CPUs are capable of [superscalar] execution,
//!   i.e. multiple instructions might be executing at the same time,
//!   even though the machine code describes a sequential process.
//!
//!   This kind of reordering is handled transparently by the CPU.
//!
//! - A **multiprocessor** system executing multiple hardware threads
//!   at the same time: In multi-threaded scenarios, you can use two
//!   kinds of primitives to deal with synchronization:
//!   - [memory fences] to ensure memory accesses are made visible to
//!     other CPUs in the right order.
//!   - [atomic operations] to ensure simultaneous access to the same
//!     memory location doesn't lead to undefined behavior.
//!
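//! For example, a shared counter can be incremented from several threads
//! without data races by using an atomic read-modify-write operation
//! (a minimal sketch; the counter name and the thread/iteration counts
//! are illustrative):
//!
//! ```rust
//! use std::sync::atomic::{AtomicU32, Ordering};
//! use std::thread;
//!
//! static COUNTER: AtomicU32 = AtomicU32::new(0);
//!
//! let handles: Vec<_> = (0..4)
//!     .map(|_| {
//!         thread::spawn(|| {
//!             for _ in 0..1000 {
//!                 // `fetch_add` is a single atomic read-modify-write
//!                 // operation, so no increments are lost between threads.
//!                 COUNTER.fetch_add(1, Ordering::Relaxed);
//!             }
//!         })
//!     })
//!     .collect();
//!
//! for handle in handles {
//!     handle.join().unwrap();
//! }
//!
//! assert_eq!(COUNTER.load(Ordering::Relaxed), 4000);
//! ```
//!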
//! [prefetching]: https://en.wikipedia.org/wiki/Cache_prefetching
//! [compiler fences]: crate::sync::atomic::compiler_fence
//! ...
//! inconvenient to use, which is why the standard library also exposes some
//! higher-level synchronization objects.
//!
//! These abstractions can be built out of lower-level primitives.
//! For efficiency, the sync objects in the standard library are usually
//! implemented with help from the operating system's kernel, which is
//! able to reschedule the threads while they are blocked on acquiring
//! a lock.
//!
//! The following is an overview of the available synchronization
//! objects:
//!
//! - [`Arc`]: Atomically Reference-Counted pointer, which can be used
//!   in multithreaded environments to prolong the lifetime of some
//!   data until all the threads have finished using it.
//!
//! - [`Barrier`]: Ensures multiple threads will wait for each other
//!   to reach a point in the program, before continuing execution all
//!   together.
//!
//! - [`Condvar`]: Condition Variable, providing the ability to block
//!   a thread while waiting for an event to occur.
//!
//! - [`mpsc`]: Multi-producer, single-consumer queues, used for
//!   message-based communication. Can provide a lightweight
//!   inter-thread synchronization mechanism, at the cost of some
//!   extra memory.
//!
//! - [`Mutex`]: Mutual Exclusion mechanism, which ensures that at
//!   most one thread at a time is able to access some data.
//!
//! - [`Once`]: Used for thread-safe, one-time initialization of a
//!   global variable.
//!
//! - [`RwLock`]: Provides a mutual exclusion mechanism which allows
//!   multiple readers at the same time, while allowing only one
//!   writer at a time. In some cases, this can be more efficient than
//!   a mutex.
//!
//! [`Arc`]: crate::sync::Arc
//! [`Barrier`]: crate::sync::Barrier
//! [`Condvar`]: crate::sync::Condvar
//! [`mpsc`]: crate::sync::mpsc
//! [`Mutex`]: crate::sync::Mutex
//! [`Once`]: crate::sync::Once
//! [`RwLock`]: crate::sync::RwLock
#![stable(feature = "rust1", since = "1.0.0")]