//! see each type's documentation, and note that the names of actual methods may
//! differ from the tables below on certain collections.
//!
//! Throughout the documentation, we will adhere to the following conventions
//! for operation notation:
//!
//! * The collection's size is denoted by `n`.
//! * If a second collection is involved, its size is denoted by `m`.
//! * Item indices are denoted by `i`.
//! * Operations which have an *amortized* cost are suffixed with a `*`.
//! * Operations with an *expected* cost are suffixed with a `~`.
//!
//! Operations that add to a collection will occasionally require the
//! collection to be resized - an extra operation that takes *O*(*n*) time.
//!
//! *Amortized* costs are calculated to account for the time cost of such resize
//! operations *over a sufficiently large series of operations*. An individual
//! operation may be slower or faster due to the sporadic nature of collection
//! resizing; however, the average cost per operation will approach the amortized
//! cost.
//!
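//! For example, the occasional reallocation behind [`Vec`]'s amortized *O*(1)
//! `push` can be observed through `Vec::capacity`. The following is only an
//! illustrative sketch; the exact growth strategy is an implementation detail:
//!
//! ```
//! let mut v = Vec::new();
//! let mut resizes = 0;
//! let mut last_capacity = v.capacity();
//! for i in 0..1_000 {
//!     v.push(i); // usually O(1); occasionally reallocates, which is O(n)
//!     if v.capacity() != last_capacity {
//!         resizes += 1;
//!         last_capacity = v.capacity();
//!     }
//! }
//! // Resizes are rare compared to pushes, so the average cost per push
//! // stays O(1).
//! println!("{} pushes, {} resizes", v.len(), resizes);
//!
//! // Pre-allocating with `Vec::with_capacity` avoids resizing entirely:
//! // all 1,000 pushes fit in the initial allocation.
//! let mut w = Vec::with_capacity(1_000);
//! let initial = w.capacity();
//! for i in 0..1_000 {
//!     w.push(i);
//! }
//! assert_eq!(w.capacity(), initial);
//! ```
//!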
//! Rust's collections never automatically shrink, so removal operations aren't
//! amortized.
//!
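//! Because nothing shrinks automatically, releasing memory after a large number
//! of removals is an explicit step. A small sketch with [`Vec`]:
//!
//! ```
//! let mut v: Vec<u32> = (0..1_000).collect();
//! let full = v.capacity();
//! v.truncate(10); // removes 990 elements but keeps the allocation
//! assert_eq!(v.capacity(), full);
//! v.shrink_to_fit(); // explicitly release the excess capacity
//! assert!(v.capacity() >= v.len());
//! ```
//!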
//! [`HashMap`] uses *expected* costs. It is theoretically possible, though very
//! unlikely, for [`HashMap`] to experience significantly worse performance than
//! the expected cost. This is due to the probabilistic nature of hashing - i.e.
//! it is possible for different keys to hash to the same value (a collision),
//! which requires extra computation to resolve.
//!
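//! To make the collision scenario concrete, the hypothetical key type below
//! (not part of any API) forces every key to hash identically. The map still
//! behaves correctly; it simply does more work per lookup than the expected
//! *O*(1)~ cost suggests:
//!
//! ```
//! use std::collections::HashMap;
//! use std::hash::{Hash, Hasher};
//!
//! // Hypothetical key type whose `Hash` impl collides for every value.
//! #[derive(PartialEq, Eq)]
//! struct AlwaysCollides(u32);
//!
//! impl Hash for AlwaysCollides {
//!     fn hash<H: Hasher>(&self, state: &mut H) {
//!         // Every key feeds the same bytes to the hasher, so all entries
//!         // collide and lookups degrade toward a linear scan.
//!         0u8.hash(state);
//!     }
//! }
//!
//! let mut map = HashMap::new();
//! for i in 0..100 {
//!     map.insert(AlwaysCollides(i), i);
//! }
//! // Still correct, just slower than the expected cost.
//! assert_eq!(map.get(&AlwaysCollides(50)), Some(&50));
//! ```
//!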
//! ## Cost of Collection Operations
//!
//! |                | get(i)                 | insert(i)               | remove(i)              | append       | split_off(i)           | range         |
//! |----------------|------------------------|-------------------------|------------------------|--------------|------------------------|---------------|
//! | [`Vec`]        | *O*(1)                 | *O*(*n*-*i*)*           | *O*(*n*-*i*)           | *O*(*m*)*    | *O*(*n*-*i*)           | N/A           |
//! | [`VecDeque`]   | *O*(1)                 | *O*(min(*i*, *n*-*i*))* | *O*(min(*i*, *n*-*i*)) | *O*(*m*)*    | *O*(min(*i*, *n*-*i*)) | N/A           |
//! | [`LinkedList`] | *O*(min(*i*, *n*-*i*)) | *O*(min(*i*, *n*-*i*))  | *O*(min(*i*, *n*-*i*)) | *O*(1)       | *O*(min(*i*, *n*-*i*)) | N/A           |
//! | [`HashMap`]    | *O*(1)~                | *O*(1)~*                | *O*(1)~                | N/A          | N/A                    | N/A           |
//! | [`BTreeMap`]   | *O*(log(*n*))          | *O*(log(*n*))           | *O*(log(*n*))          | *O*(*n*+*m*) | N/A                    | *O*(log(*n*)) |
//!
//! Note that where ties occur, [`Vec`] is generally going to be faster than
//! [`VecDeque`], and [`VecDeque`] is generally going to be faster than
//! [`LinkedList`].
//!
//! For Sets, all operations have the cost of the equivalent Map operation.
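//!
//! For instance, a few of the tabulated operations look like this in practice
//! (a behavioural sketch; the table above describes only their costs):
//!
//! ```
//! use std::collections::BTreeMap;
//!
//! // Vec: `append` moves all of `tail`'s m elements; `split_off(i)` moves n-i.
//! let mut head = vec![1, 2, 3];
//! let mut tail = vec![4, 5, 6];
//! head.append(&mut tail);       // O(m)*, leaves `tail` empty
//! let back = head.split_off(4); // O(n-i)
//! assert_eq!(head, [1, 2, 3, 4]);
//! assert_eq!(back, [5, 6]);
//!
//! // BTreeMap: `range` finds the start of the range in O(log(n)).
//! let mut map = BTreeMap::new();
//! for i in 0..10 {
//!     map.insert(i, i * i);
//! }
//! let squares: Vec<_> = map.range(3..6).map(|(_, v)| *v).collect();
//! assert_eq!(squares, [9, 16, 25]);
//! ```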
//!
//! # Correct and Efficient Usage of Collections
//!