diff --git a/AVL Tree/README.markdown b/AVL Tree/README.markdown index a9218203a..c8d8d4628 100644 --- a/AVL Tree/README.markdown +++ b/AVL Tree/README.markdown @@ -53,7 +53,7 @@ For the rotation we're using the terminology: * *RotationSubtree* - subtree of the *Pivot* upon the side of rotation * *OppositeSubtree* - subtree of the *Pivot* opposite the side of rotation -Let take an example of balancing the unbalanced tree using *Right* (clockwise direction) rotation: +Let's take an example of balancing an unbalanced tree using a *Right* (clockwise) rotation: ![Rotation1](Images/RotationStep1.jpg) ![Rotation2](Images/RotationStep2.jpg) ![Rotation3](Images/RotationStep3.jpg) @@ -76,7 +76,7 @@ Insertion never needs more than 2 rotations. Removal might require up to __log(n)__ ## The code -Most of the code in [AVLTree.swift](AVLTree.swift) is just regular [binary search tree](../Binary Search Tree/) stuff. You'll find this in any implementation of a binary search tree. For example, searching the tree is exactly the same. The only things that an AVL tree does slightly differently are inserting and deleting the nodes. +Most of the code in [AVLTree.swift](AVLTree.swift) is just regular [binary search tree](../Binary%20Search%20Tree/) stuff. You'll find this in any implementation of a binary search tree. For example, searching the tree is exactly the same. The only things that an AVL tree does slightly differently are inserting and deleting the nodes. > **Note:** If you're a bit fuzzy on the regular operations of a binary search tree, I suggest you [catch up on those first](../Binary%20Search%20Tree/). It will make the rest of the AVL tree easier to understand. diff --git a/Bounded Priority Queue/README.markdown b/Bounded Priority Queue/README.markdown index 4e4c89272..8cbaa85b2 100644 --- a/Bounded Priority Queue/README.markdown +++ b/Bounded Priority Queue/README.markdown @@ -26,7 +26,7 @@ Suppose that we wish to insert the element `G` with priority 0.1 into this BPQ. ## Implementation -While a [heap](../Heap/) may be a really simple implementation for a priority queue, a sorted [linked list](../Linked List/) allows for **O(k)** insertion and **O(1)** deletion, where **k** is the bounding number of elements. +While a [heap](../Heap/) may be a really simple implementation for a priority queue, a sorted [linked list](../Linked%20List/) allows for **O(k)** insertion and **O(1)** deletion, where **k** is the bounding number of elements. Here's how you could implement it in Swift: diff --git a/Count Occurrences/README.markdown b/Count Occurrences/README.markdown index 4c65c4219..85b77d2d0 100644 --- a/Count Occurrences/README.markdown +++ b/Count Occurrences/README.markdown @@ -36,7 +36,7 @@ func countOccurrencesOfKey(_ key: Int, inArray a: [Int]) -> Int { } return low } - + func rightBoundary() -> Int { var low = 0 var high = a.count @@ -50,12 +50,12 @@ func countOccurrencesOfKey(_ key: Int, inArray a: [Int]) -> Int { } return low } - + return rightBoundary() - leftBoundary() } ``` -Notice that the helper functions `leftBoundary()` and `rightBoundary()` are very similar to the [binary search](../Binary Search/) algorithm. The big difference is that they don't stop when they find the search key, but keep going. +Notice that the helper functions `leftBoundary()` and `rightBoundary()` are very similar to the [binary search](../Binary%20Search/) algorithm. The big difference is that they don't stop when they find the search key, but keep going.
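As a quick check, here is what the boundary difference computes on a small sorted array (a sketch with made-up values; the array must be sorted from low to high for the boundaries to be valid):

```swift
// Hypothetical sample input: the array must be sorted.
let a = [0, 1, 1, 3, 3, 3, 3, 6, 8, 10, 11, 11]

countOccurrencesOfKey(3, inArray: a)   // 4  (rightBoundary() = 7, leftBoundary() = 3)
countOccurrencesOfKey(5, inArray: a)   // 0  (both boundaries land on index 7)
```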
To test this algorithm, copy the code to a playground and then do: diff --git a/Depth-First Search/README.markdown b/Depth-First Search/README.markdown index 9e1a5112d..98a19e0cb 100644 --- a/Depth-First Search/README.markdown +++ b/Depth-First Search/README.markdown @@ -40,7 +40,7 @@ func depthFirstSearch(_ graph: Graph, source: Node) -> [String] { } ``` -Where a [breadth-first search](../Breadth-First Search/) visits all immediate neighbors first, a depth-first search tries to go as deep down the tree or graph as it can. +Where a [breadth-first search](../Breadth-First%20Search/) visits all immediate neighbors first, a depth-first search tries to go as deep down the tree or graph as it can. Put this code in a playground and test it like so: @@ -71,13 +71,13 @@ print(nodesExplored) ``` This will output: `["a", "b", "d", "e", "h", "f", "g", "c"]` - + ## What is DFS good for? Depth-first search can be used to solve many problems, for example: * Finding connected components of a sparse graph -* [Topological sorting](../Topological Sort/) of nodes in a graph +* [Topological sorting](../Topological%20Sort/) of nodes in a graph * Finding bridges of a graph (see: [Bridges](https://en.wikipedia.org/wiki/Bridge_(graph_theory)#Bridge-finding_algorithm)) * And lots of others! diff --git a/Deque/README.markdown b/Deque/README.markdown index 67a734576..fda1de5f8 100644 --- a/Deque/README.markdown +++ b/Deque/README.markdown @@ -9,23 +9,23 @@ Here is a very basic implementation of a deque in Swift: ```swift public struct Deque<T> { private var array = [T]() - + public var isEmpty: Bool { return array.isEmpty } - + public var count: Int { return array.count } - + public mutating func enqueue(_ element: T) { array.append(element) } - + public mutating func enqueueFront(_ element: T) { array.insert(element, at: 0) } - + public mutating func dequeue() -> T? { if isEmpty { return nil @@ -33,7 +33,7 @@ public struct Deque<T> { return array.removeFirst() } } - + public mutating func dequeueBack() -> T? { if isEmpty { return nil @@ -41,11 +41,11 @@ public struct Deque<T> { return array.removeLast() } } - + public func peekFront() -> T? { return array.first } - + public func peekBack() -> T? { return array.last } @@ -73,7 +73,7 @@ deque.dequeue() // 5 This particular implementation of `Deque` is simple but not very efficient. Several operations are **O(n)**, notably `enqueueFront()` and `dequeue()`. I've included it only to show the principle of what a deque does. ## A more efficient version - + The reason that `dequeue()` and `enqueueFront()` are **O(n)** is that they work on the front of the array. If you remove an element at the front of an array, what happens is that all the remaining elements need to be shifted in memory. Let's say the deque's array contains the following items: @@ -92,7 +92,7 @@ Likewise, inserting an element at the front of the array is expensive because it First, the elements `2`, `3`, and `4` are moved up by one position in the computer's memory, and then the new element `5` is inserted at the position where `2` used to be. -Why is this not an issue at for `enqueue()` and `dequeueBack()`? Well, these operations are performed at the end of the array. +Why is this not an issue for `enqueue()` and `dequeueBack()`? Well, these operations are performed at the end of the array.
The way resizable arrays are implemented in Swift is by reserving a certain amount of free space at the back. Our initial array `[ 1, 2, 3, 4]` actually looks like this in memory: @@ -120,26 +120,26 @@ public struct Deque<T> { private var array: [T?] private var head: Int private var capacity: Int private let originalCapacity: Int - + public init(_ capacity: Int = 10) { self.capacity = max(capacity, 1) originalCapacity = self.capacity array = [T?](repeating: nil, count: capacity) head = capacity } - + public var isEmpty: Bool { return count == 0 } - + public var count: Int { return array.count - head } - + public mutating func enqueue(_ element: T) { array.append(element) } - + public mutating func enqueueFront(_ element: T) { // this is explained below } @@ -155,7 +155,7 @@ return array.removeLast() } } - + public func peekFront() -> T? { if isEmpty { return nil @@ -163,7 +163,7 @@ return array[head] } } - + public func peekBack() -> T? { if isEmpty { return nil @@ -176,7 +176,7 @@ It still largely looks the same -- `enqueue()` and `dequeueBack()` haven't changed -- but there are also a few important differences. The array now stores objects of type `T?` instead of just `T` because we need some way to mark array elements as being empty. -The `init` method allocates a new array that contains a certain number of `nil` values. This is the free room we have to work with at the beginning of the array. By default this creates 10 empty spots. +The `init` method allocates a new array that contains a certain number of `nil` values. This is the free room we have to work with at the beginning of the array. By default this creates 10 empty spots. The `head` variable is the index in the array of the front-most object. Since the queue is currently empty, `head` points at an index beyond the end of the array. @@ -219,7 +219,7 @@ Notice how the array has resized itself. There was no room to add the `1`, so Sw | head -> **Note:** You won't see those empty spots at the back of the array when you `print(deque.array)`. This is because Swift hides them from you. Only the ones at the front of the array show up. +> **Note:** You won't see those empty spots at the back of the array when you `print(deque.array)`. This is because Swift hides them from you. Only the ones at the front of the array show up. The `dequeue()` method does the opposite of `enqueueFront()`: it reads the value at `head`, sets the array element back to `nil`, and then moves `head` one position to the right: @@ -250,7 +250,7 @@ There is one tiny problem... If you enqueue a lot of objects at the front, you'r } ``` -If `head` equals 0, there is no room left at the front. When that happens, we add a whole bunch of new `nil` elements to the array. This is an **O(n)** operation but since this cost gets divided over all the `enqueueFront()`s, each individual call to `enqueueFront()` is still **O(1)** on average. +If `head` equals 0, there is no room left at the front. When that happens, we add a whole bunch of new `nil` elements to the array. This is an **O(n)** operation but since this cost gets divided over all the `enqueueFront()`s, each individual call to `enqueueFront()` is still **O(1)** on average. > **Note:** We also multiply the capacity by 2 each time this happens, so if your queue grows bigger and bigger, the resizing happens less often. This is also what Swift arrays automatically do at the back.
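To make the elided `enqueueFront()` concrete (the listing above only shows `// this is explained below`), here is a sketch based on the behavior described in this section: when the front gap is used up, double the capacity and prepend that many `nil` placeholders. The file's exact code may differ slightly.

```swift
public mutating func enqueueFront(_ element: T) {
  // No free space left at the front? Grow the gap: double the
  // capacity and prepend that many nil placeholders.
  if head == 0 {
    capacity *= 2
    let emptySpace = [T?](repeating: nil, count: capacity)
    array.insert(contentsOf: emptySpace, at: 0)
    head = capacity
  }
  // Move head one position to the left and store the new element there.
  head -= 1
  array[head] = element
}
```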
@@ -302,7 +302,7 @@ This way we can strike a balance between fast enqueuing and dequeuing at the fro ## See also -Other ways to implement deque are by using a [doubly linked list](../Linked List/), a [circular buffer](../Ring Buffer/), or two [stacks](../Stack/) facing opposite directions. +Other ways to implement a deque are by using a [doubly linked list](../Linked%20List/), a [circular buffer](../Ring%20Buffer/), or two [stacks](../Stack/) facing opposite directions. [A fully-featured deque implementation in Swift](https://github.com/lorentey/Deque) diff --git a/Heap Sort/README.markdown b/Heap Sort/README.markdown index 5f047f82b..7fdd8d2ca 100644 --- a/Heap Sort/README.markdown +++ b/Heap Sort/README.markdown @@ -40,7 +40,7 @@ And fix up the heap to make it valid max-heap again: As you can see, the largest items are making their way to the back. We repeat this process until we arrive at the root node and then the whole array is sorted. -> **Note:** This process is very similar to [selection sort](../Selection Sort/), which repeatedly looks for the minimum item in the remainder of the array. Extracting the minimum or maximum value is what heaps are good at. +> **Note:** This process is very similar to [selection sort](../Selection%20Sort/), which repeatedly looks for the minimum item in the remainder of the array. Extracting the minimum or maximum value is what heaps are good at. Performance of heap sort is **O(n lg n)** in best, worst, and average case. Because we modify the array directly, heap sort can be performed in-place. But it is not a stable sort: the relative order of identical elements is not preserved. diff --git a/Huffman Coding/README.markdown b/Huffman Coding/README.markdown index 6b2c0d245..4e6f75514 100644 --- a/Huffman Coding/README.markdown +++ b/Huffman Coding/README.markdown @@ -16,7 +16,7 @@ If you count how often each byte appears, you can clearly see that some bytes oc c: 2 p: 1 r: 2 e: 1 n: 2 i: 1 - + We can assign bit strings to each of these bytes. The more common a byte is, the fewer bits we assign to it. We might get something like this: space: 5 010 u: 1 11001 @@ -30,12 +30,12 @@ We can assign bit strings to each of these bytes. The more common a byte is, the Now if we replace the original bytes with these bit strings, the compressed output becomes: - 101 000 010 111 11001 0011 10001 010 0010 000 1001 11010 101 + 101 000 010 111 11001 0011 10001 010 0010 000 1001 11010 101 s o _ m u c h _ w o r d s - + 010 0010 000 0010 010 111 11011 0110 01111 010 0011 000 111 _ w o w _ m a n y _ c o m - + 11000 1001 01110 101 101 10000 000 0110 0 p r e s s i o n @@ -57,7 +57,7 @@ The edges between the nodes either say "1" or "0". These correspond to the bit-e Compression is then a matter of looping through the input bytes, and for each byte traverse the tree from the root node to that byte's leaf node. Every time we take a left branch, we emit a 1-bit. When we take a right branch, we emit a 0-bit. -For example, to go from the root node to `c`, we go right (`0`), right again (`0`), left (`1`), and left again (`1`). So the Huffman code for `c` is `0011`. +For example, to go from the root node to `c`, we go right (`0`), right again (`0`), left (`1`), and left again (`1`). So the Huffman code for `c` is `0011`. Decompression works in exactly the opposite way. It reads the compressed bits one-by-one and traverses the tree until we get to a leaf node. The value of that leaf node is the uncompressed byte.
For example, if the bits are `11010`, we start at the root and go left, left again, right, left, and a final right to end up at `d`. @@ -137,7 +137,7 @@ Here are the definitions we need: ```swift class Huffman { typealias NodeIndex = Int - + struct Node { var count = 0 var index: NodeIndex = -1 @@ -152,7 +152,7 @@ class Huffman { } ``` -The tree structure is stored in the `tree` array and will be made up of `Node` objects. Since this is a [binary tree](../Binary Tree/), each node needs two children, `left` and `right`, and a reference back to its `parent` node. Unlike a typical binary tree, however, these nodes don't to use pointers to refer to each other but simple integer indices in the `tree` array. (We also store the array `index` of the node itself; the reason for this will become clear later.) +The tree structure is stored in the `tree` array and will be made up of `Node` objects. Since this is a [binary tree](../Binary%20Tree/), each node needs two children, `left` and `right`, and a reference back to its `parent` node. Unlike a typical binary tree, however, these nodes don't use pointers to refer to each other but simple integer indices in the `tree` array. (We also store the array `index` of the node itself; the reason for this will become clear later.) Note that `tree` currently has room for 256 entries. These are for the leaf nodes because there are 256 possible byte values. Of course, not all of those may end up being used, depending on the input data. Later, we'll add more nodes as we build up the actual tree. For the moment there isn't a tree yet, just 256 separate leaf nodes with no connections between them. All the node counts are 0. @@ -183,7 +183,7 @@ Instead, we'll add a method to export the frequency table without all the pieces var byte: UInt8 = 0 var count = 0 } - + func frequencyTable() -> [Freq] { var a = [Freq]() for i in 0..<256 where tree[i].count > 0 { @@ -209,7 +209,7 @@ To build the tree, we do the following: 2. Create a new parent node that links these two nodes together. 3. This repeats over and over until only one node with no parent remains. This becomes the root node of the tree. -This is an ideal place to use a [priority queue](../Priority Queue/). A priority queue is a data structure that is optimized so that finding the minimum value is always very fast. Here, we repeatedly need to find the node with the smallest count. +This is an ideal place to use a [priority queue](../Priority%20Queue/). A priority queue is a data structure that is optimized so that finding the minimum value is always very fast. Here, we repeatedly need to find the node with the smallest count. The function `buildTree()` then becomes: @@ -233,7 +233,7 @@ The function `buildTree()` then becomes: tree[node1.index].parent = parentNode.index // 4 tree[node2.index].parent = parentNode.index - + queue.enqueue(parentNode) // 5 } @@ -286,7 +286,7 @@ Now that we know how to build the compression tree from the frequency table, we } ``` -This first calls `countByteFrequency()` to build the frequency table, then `buildTree()` to put together the compression tree. It also creates a `BitWriter` object for writing individual bits. +This first calls `countByteFrequency()` to build the frequency table, then `buildTree()` to put together the compression tree. It also creates a `BitWriter` object for writing individual bits. Then it loops through the entire input and for each byte calls `traverseTree()`. That method will step through the tree nodes and for each node write a 1 or 0 bit.
Finally, we return the `BitWriter`'s data object. @@ -309,7 +309,7 @@ The interesting stuff happens in `traverseTree()`. This is a recursive method: } ``` -When we call this method from `compressData()`, the `nodeIndex` parameter is the array index of the leaf node for the byte that we're about to encode. This method recursively walks the tree from a leaf node up to the root, and then back again. +When we call this method from `compressData()`, the `nodeIndex` parameter is the array index of the leaf node for the byte that we're about to encode. This method recursively walks the tree from a leaf node up to the root, and then back again. As we're going back from the root to the leaf node, we write a 1 bit or a 0 bit for every node we encounter. If a child is the left node, we emit a 1; if it's the right node, we emit a 0. @@ -395,10 +395,10 @@ Here's how you would use the decompression method: ```swift let frequencyTable = huffman1.frequencyTable() - + let huffman2 = Huffman() let decompressedData = huffman2.decompressData(compressedData, frequencyTable: frequencyTable) - + let s2 = String(data: decompressedData, encoding: NSUTF8StringEncoding)! ``` diff --git a/Knuth-Morris-Pratt/README.markdown b/Knuth-Morris-Pratt/README.markdown index f01b87b3d..95fdea9e9 100644 --- a/Knuth-Morris-Pratt/README.markdown +++ b/Knuth-Morris-Pratt/README.markdown @@ -1,9 +1,9 @@ # Knuth-Morris-Pratt String Search -Goal: Write a linear-time string matching algorithm in Swift that returns the indexes of all the occurrencies of a given pattern. - +Goal: Write a linear-time string matching algorithm in Swift that returns the indexes of all the occurrences of a given pattern. + In other words, we want to implement an `indexesOf(pattern: String)` extension on `String` that returns an array `[Int]` of integers, representing all occurrences' indexes of the search pattern, or `nil` if the pattern could not be found inside the string. - + For example: ```swift @@ -16,7 +16,7 @@ concert.indexesOf(ptnr: "🎻🎷") // Output: [6] The [Knuth-Morris-Pratt algorithm](https://en.wikipedia.org/wiki/Knuth–Morris–Pratt_algorithm) is considered one of the best algorithms for solving the pattern matching problem. Although in practice [Boyer-Moore](../Boyer-Moore/) is usually preferred, the algorithm that we will introduce is simpler, and has the same (linear) running time. -The idea behind the algorithm is not too different from the [naive string search](../Brute-Force String Search/) procedure. As it, Knuth-Morris-Pratt aligns the text with the pattern and goes with character comparisons from left to right. But, instead of making a shift of one character when a mismatch occurs, it uses a more intelligent way to move the pattern along the text. In fact, the algorithm features a pattern pre-processing stage where it acquires all the informations that will make the algorithm skip redundant comparisons, resulting in larger shifts. +The idea behind the algorithm is not too different from the [naive string search](../Brute-Force%20String%20Search/) procedure. Like the naive algorithm, Knuth-Morris-Pratt aligns the text with the pattern and performs character comparisons from left to right. But, instead of shifting the pattern by one character when a mismatch occurs, it uses a more intelligent way to move the pattern along the text. In fact, the algorithm features a pattern pre-processing stage where it acquires all the information that will let it skip redundant comparisons, resulting in larger shifts.
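The next paragraph defines the `suffixPrefix` table that this pre-processing stage computes. As a cross-check of that definition, here is a naive quadratic construction of the same table (the function name is mine and it is for illustration only; the article builds the table in linear time from the Z-array instead):

```swift
// Naive O(n^2) construction of the suffixPrefix table.
// Assumes a non-empty pattern.
func naiveSuffixPrefix(_ pattern: String) -> [Int] {
  let p = Array(pattern)
  var table = [Int](repeating: 0, count: p.count)
  for i in 1..<p.count {
    // Find the longest proper suffix of p[0...i] that is also a prefix of p.
    var len = i
    while len > 0 {
      if Array(p[(i - len + 1)...i]) == Array(p[0..<len]) {
        table[i] = len
        break
      }
      len -= 1
    }
  }
  return table
}

naiveSuffixPrefix("ACTGACTA")  // [0, 0, 0, 0, 0, 0, 3, 1]
```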
The pre-processing stage produces an array (called `suffixPrefix` in the code) of integers in which every element `suffixPrefix[i]` records the length of the longest proper suffix of `P[0...i]` (where `P` is the pattern) that matches a prefix of `P`. In other words, `suffixPrefix[i]` is the longest proper substring of `P` that ends at position `i` and that is a prefix of `P`. Just a quick example. Consider `P = "abadfryaabsabadffg"`, then `suffixPrefix[4] = 0`, `suffixPrefix[9] = 2`, `suffixPrefix[14] = 4`. There are different ways to obtain the values of the `suffixPrefix` array. We will use the method based on the [Z-Algorithm](../Z-Algorithm/). This function takes the pattern as input and produces an array of integers. Each element represents the length of the longest substring starting at position `i` of `P` that matches a prefix of `P`. You can see that the two arrays are similar: they record the same information, but in different places. We only have to find a method to map `Z[i]` to `suffixPrefix[j]`. It is not that difficult, and this is the code that will do it for us: @@ -93,10 +93,10 @@ extension String { ``` Let's work through an example with the code above. Let's consider the string `P = "ACTGACTA"`, the resulting `suffixPrefix` array, equal to `[0, 0, 0, 0, 0, 0, 3, 1]`, and the text `T = "GCACTGACTGACTGACTAG"`. The algorithm begins with the text and the pattern aligned like below. We have to compare `T[0]` with `P[0]`. - + 1 0123456789012345678 - text: GCACTGACTGACTGACTAG + text: GCACTGACTGACTGACTAG textIndex: ^ pattern: ACTGACTA patternIndex: ^ @@ -104,54 +104,54 @@ Let's work through an example with the code above. Let's consider the string ` suffixPrefix: 00000031 We have a mismatch and we move on to comparing `T[1]` and `P[0]`. We have to check whether a pattern occurrence is present, but there is none. So, we have to shift the pattern right, and by doing so we have to check `suffixPrefix[1 - 1]`. Its value is `0` and we restart by comparing `T[1]` with `P[0]`. Again a mismatch occurs, so we go on with `T[2]` and `P[0]`. - + 1 0123456789012345678 text: GCACTGACTGACTGACTAG - textIndex: ^ + textIndex: ^ pattern: ACTGACTA patternIndex: ^ suffixPrefix: 00000031 This time we have a match. And it continues until position `8`. Unfortunately, the length of the match is not equal to the pattern length, so we cannot report an occurrence. But we are still lucky because we can use the values computed in the `suffixPrefix` array now. In fact, the length of the match is `7`, and if we look at the element `suffixPrefix[7 - 1]` we discover that it is `3`. This information tells us that the prefix of `P` matches the suffix of the substring `T[0...8]`. So the `suffixPrefix` array guarantees that the two substrings match and that we do not have to compare their characters, so we can shift the pattern right by more than one character! The comparisons restart from `T[9]` and `P[3]`. - + 1 0123456789012345678 - text: GCACTGACTGACTGACTAG + text: GCACTGACTGACTGACTAG textIndex: ^ pattern: ACTGACTA patternIndex: ^ suffixPrefix: 00000031 They match, so we continue the comparisons until position `13`, where a mismatch occurs between characters `G` and `A`. Just like before, we are lucky and we can use the `suffixPrefix` array to shift the pattern right. - + 1 0123456789012345678 - text: GCACTGACTGACTGACTAG + text: GCACTGACTGACTGACTAG textIndex: ^ pattern: ACTGACTA patternIndex: ^ suffixPrefix: 00000031 Again, we have to compare.
But this time the comparisons finally take us to an occurrence, at position `17 - 7 = 10`. - + 1 0123456789012345678 - text: GCACTGACTGACTGACTAG + text: GCACTGACTGACTGACTAG textIndex: ^ pattern: ACTGACTA patternIndex: ^ suffixPrefix: 00000031 The algorithm then tries to compare `T[18]` with `P[1]` (because we used the element `suffixPrefix[8 - 1] = 1`), but it fails, and at the next iteration it ends its work. - + The pre-processing stage involves only the pattern. The running time of the Z-Algorithm is linear and takes `O(n)`, where `n` is the length of the pattern `P`. After that, the search stage does not "overshoot" the length of the text `T` (call it `m`). It can be proved that the number of comparisons in the search stage is bounded by `2 * m`. The final running time of the Knuth-Morris-Pratt algorithm is `O(n + m)`. > **Note:** To execute the code in [KnuthMorrisPratt.swift](./KnuthMorrisPratt.swift) you have to copy the [ZAlgorithm.swift](../Z-Algorithm/ZAlgorithm.swift) file contained in the [Z-Algorithm](../Z-Algorithm/) folder. The [KnuthMorrisPratt.playground](./KnuthMorrisPratt.playground) already includes the definition of the `Zeta` function. -Credits: This code is based on the handbook ["Algorithm on String, Trees and Sequences: Computer Science and Computational Biology"](https://books.google.it/books/about/Algorithms_on_Strings_Trees_and_Sequence.html?id=Ofw5w1yuD8kC&redir_esc=y) by Dan Gusfield, Cambridge University Press, 1997. +Credits: This code is based on the handbook ["Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology"](https://books.google.it/books/about/Algorithms_on_Strings_Trees_and_Sequence.html?id=Ofw5w1yuD8kC&redir_esc=y) by Dan Gusfield, Cambridge University Press, 1997. *Written for Swift Algorithm Club by Matteo Dunnhofer* diff --git a/Kth Largest Element/README.markdown b/Kth Largest Element/README.markdown index d50d691a1..53bec1c1c 100644 --- a/Kth Largest Element/README.markdown +++ b/Kth Largest Element/README.markdown @@ -44,7 +44,7 @@ Of course, if you were looking for the k-th *smallest* element, you'd use `a[k]` ## A faster solution -There is a clever algorithm that combines the ideas of [binary search](../Binary Search/) and [quicksort](../Quicksort/) to arrive at an **O(n)** solution. +There is a clever algorithm that combines the ideas of [binary search](../Binary%20Search/) and [quicksort](../Quicksort/) to arrive at an **O(n)** solution. Recall that binary search splits the array in half over and over again, to quickly narrow in on the value you're searching for. That's what we'll do here too.
@@ -86,7 +86,7 @@ The following function implements these ideas: ```swift public func randomizedSelect<T: Comparable>(array: [T], order k: Int) -> T { var a = array - + func randomPivot(inout a: [T], _ low: Int, _ high: Int) -> T { let pivotIndex = random(min: low, max: high) swap(&a, pivotIndex, high) @@ -120,7 +120,7 @@ public func randomizedSelect<T: Comparable>(array: [T], order k: Int) -> T { return a[low] } } - + precondition(a.count > 0) return randomizedSelect(&a, 0, a.count - 1, k) } diff --git a/Minimum Spanning Tree (Unweighted)/README.markdown b/Minimum Spanning Tree (Unweighted)/README.markdown index cfd29c987..23f18c335 100644 --- a/Minimum Spanning Tree (Unweighted)/README.markdown +++ b/Minimum Spanning Tree (Unweighted)/README.markdown @@ -16,7 +16,7 @@ Drawn as a more conventional tree it looks like this: ![An actual tree](Images/Tree.png) -To calculate the minimum spanning tree on an unweighted graph, we can use the [breadth-first search](../Breadth-First Search/) algorithm. Breadth-first search starts at a source node and traverses the graph by exploring the immediate neighbor nodes first, before moving to the next level neighbors. If we tweak this algorithm by selectively removing edges, then it can convert the graph into the minimum spanning tree. +To calculate the minimum spanning tree on an unweighted graph, we can use the [breadth-first search](../Breadth-First%20Search/) algorithm. Breadth-first search starts at a source node and traverses the graph by exploring the immediate neighbor nodes first, before moving to the next level neighbors. If we tweak this algorithm by selectively removing edges, then it can convert the graph into the minimum spanning tree. Let's step through the example. We start with the source node `a`, add it to a queue and mark it as visited. @@ -185,6 +185,6 @@ print(minimumSpanningTree) // [node: a edges: ["b", "h"]] // [node: h edges: ["g", "i"]] -> **Note:** On an unweighed graph, any spanning tree is always a minimal spanning tree. This means you can also use a [depth-first search](../Depth-First Search) to find the minimum spanning tree. +> **Note:** On an unweighted graph, any spanning tree is always a minimum spanning tree. This means you can also use a [depth-first search](../Depth-First%20Search) to find the minimum spanning tree. *Written by [Chris Pilcher](https://github.com/chris-pilcher) and Matthijs Hollemans* diff --git a/Priority Queue/README.markdown b/Priority Queue/README.markdown index 8308ec16f..fbbcaa0f3 100644 --- a/Priority Queue/README.markdown +++ b/Priority Queue/README.markdown @@ -12,7 +12,7 @@ Examples of algorithms that can benefit from a priority queue: - Event-driven simulations. Each event is given a timestamp and you want events to be performed in order of their timestamps. The priority queue makes it easy to find the next event that needs to be simulated. - Dijkstra's algorithm for graph searching uses a priority queue to calculate the minimum cost. -- [Huffman coding](../Huffman Coding/) for data compression. This algorithm builds up a compression tree. It repeatedly needs to find the two nodes with the smallest frequencies that do not have a parent node yet. +- [Huffman coding](../Huffman%20Coding/) for data compression. This algorithm builds up a compression tree. It repeatedly needs to find the two nodes with the smallest frequencies that do not have a parent node yet. - A* pathfinding for artificial intelligence. - Lots of other places!
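To make the event-simulation use case concrete, here is a small sketch. It assumes a heap-backed `PriorityQueue<T>` like the one shown later in this README, whose `sort` closure returns `true` when its first argument has higher priority:

```swift
struct Event {
  let name: String
  let timestamp: Double
}

// Earlier timestamp = higher priority, so the queue always hands us
// the next event that needs to be simulated.
var events = PriorityQueue<Event>(sort: { $0.timestamp < $1.timestamp })
events.enqueue(Event(name: "spawn enemy", timestamp: 3.2))
events.enqueue(Event(name: "play sound", timestamp: 1.5))
events.enqueue(Event(name: "end level", timestamp: 7.0))

events.dequeue()?.name   // "play sound", the event with the smallest timestamp
```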
@@ -31,8 +31,8 @@ Common operations on a priority queue: There are different ways to implement priority queues: -- As a [sorted array](../Ordered Array/). The most important item is at the end of the array. Downside: inserting new items is slow because they must be inserted in sorted order. -- As a balanced [binary search tree](../Binary Search Tree/). This is great for making a double-ended priority queue because it implements both "find minimum" and "find maximum" efficiently. +- As a [sorted array](../Ordered%20Array/). The most important item is at the end of the array. Downside: inserting new items is slow because they must be inserted in sorted order. +- As a balanced [binary search tree](../Binary%20Search%20Tree/). This is great for making a double-ended priority queue because it implements both "find minimum" and "find maximum" efficiently. - As a [heap](../Heap/). The heap is a natural data structure for a priority queue. In fact, the two terms are often used as synonyms. A heap is more efficient than a sorted array because a heap only has to be partially sorted. All heap operations are **O(log n)**. Here's a Swift priority queue based on a heap: diff --git a/Quicksort/README.markdown b/Quicksort/README.markdown index d359ecb5c..e0bd8e604 100644 --- a/Quicksort/README.markdown +++ b/Quicksort/README.markdown @@ -14,7 +14,7 @@ func quicksort<T: Comparable>(_ a: [T]) -> [T] { let less = a.filter { $0 < pivot } let equal = a.filter { $0 == pivot } let greater = a.filter { $0 > pivot } - + return quicksort(less) + equal + quicksort(greater) } ``` @@ -30,7 +30,7 @@ Here's how it works. When given an array, `quicksort()` splits it up into three All the elements less than the pivot go into a new array called `less`. All the elements equal to the pivot go into the `equal` array. And you guessed it, all elements greater than the pivot go into the third array, `greater`. This is why the generic type `T` must be `Comparable`, so we can compare the elements with `<`, `==`, and `>`. -Once we have these three arrays, `quicksort()` recursively sorts the `less` array and the `greater` array, then glues those sorted subarrays back together with the `equal` array to get the final result. +Once we have these three arrays, `quicksort()` recursively sorts the `less` array and the `greater` array, then glues those sorted subarrays back together with the `equal` array to get the final result. ## An example @@ -73,7 +73,7 @@ The `less` array is empty because there was no value smaller than `-1`; the othe That `greater` array was: [ 3, 2, 5 ] - + This works just the same way as before: we pick the middle element `2` as the pivot and fill up the subarrays: less: [ ] @@ -122,7 +122,7 @@ There is no guarantee that partitioning keeps the elements in the same relative [ 3, 0, 5, 2, -1, 1, 8, 8, 14, 26, 10, 27, 9 ] -The only guarantee is that to the left of the pivot are all the smaller elements and to the right are all the larger elements. Because partitioning can change the original order of equal elements, quicksort does not produce a "stable" sort (unlike [merge sort](../Merge Sort/), for example). Most of the time that's not a big deal. +The only guarantee is that to the left of the pivot are all the smaller elements and to the right are all the larger elements. Because partitioning can change the original order of equal elements, quicksort does not produce a "stable" sort (unlike [merge sort](../Merge%20Sort/), for example). Most of the time that's not a big deal.
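As a quick playground check of the filter-based `quicksort()` from the top of this README, here it is applied to the example list used in the walkthrough:

```swift
let list = [10, 0, 3, 9, 2, 14, 8, 27, 1, 5, 8, -1, 26]
quicksort(list)  // [-1, 0, 1, 2, 3, 5, 8, 8, 9, 10, 14, 26, 27]
```

Note that the two `8`s end up adjacent in the output either way; stability only matters when equal elements carry extra, distinguishable data.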
## Lomuto's partitioning scheme @@ -133,7 +133,7 @@ Here's an implementation of Lomuto's partitioning scheme in Swift: ```swift func partitionLomuto<T: Comparable>(_ a: inout [T], low: Int, high: Int) -> Int { let pivot = a[high] - + var i = low for j in low..<high { if a[j] <= pivot { (a[i], a[j]) = (a[j], a[i]) i += 1 } } - + (a[i], a[high]) = (a[high], a[i]) return i } @@ -168,7 +168,7 @@ After partitioning, the array looks like this: The variable `p` contains the return value of the call to `partitionLomuto()` and is 7. This is the index of the pivot element in the new array (marked with a star). -The left partition goes from 0 to `p-1` and is `[ 0, 3, 2, 1, 5, 8, -1 ]`. The right partition goes from `p+1` to the end, and is `[ 9, 10, 14, 26, 27 ]` (the fact that the right partition is already sorted is a coincidence). +The left partition goes from 0 to `p-1` and is `[ 0, 3, 2, 1, 5, 8, -1 ]`. The right partition goes from `p+1` to the end, and is `[ 9, 10, 14, 26, 27 ]` (the fact that the right partition is already sorted is a coincidence). You may notice something interesting... The value `8` occurs more than once in the array. One of those `8`s did not end up neatly in the middle but somewhere in the left partition. That's a small downside of the Lomuto algorithm as it makes quicksort slower if there are a lot of duplicate elements. @@ -281,11 +281,11 @@ func partitionHoare<T: Comparable>(_ a: inout [T], low: Int, high: Int) -> Int { let pivot = a[low] var i = low - 1 var j = high + 1 - + while true { repeat { j -= 1 } while a[j] > pivot repeat { i += 1 } while a[i] < pivot - + if i < j { swap(&a[i], &a[j]) } else { @@ -309,9 +309,9 @@ The result is: [ -1, 0, 3, 8, 2, 5, 1, 27, 10, 14, 9, 8, 26 ] -Note that this time the pivot isn't in the middle at all. Unlike with Lomuto's scheme, the return value is not necessarily the index of the pivot element in the new array. +Note that this time the pivot isn't in the middle at all. Unlike with Lomuto's scheme, the return value is not necessarily the index of the pivot element in the new array. -Instead, the array is partitioned into the regions `[low...p]` and `[p+1...high]`. Here, the return value `p` is 6, so the two partitions are `[ -1, 0, 3, 8, 2, 5, 1 ]` and `[ 27, 10, 14, 9, 8, 26 ]`. +Instead, the array is partitioned into the regions `[low...p]` and `[p+1...high]`. Here, the return value `p` is 6, so the two partitions are `[ -1, 0, 3, 8, 2, 5, 1 ]` and `[ 27, 10, 14, 9, 8, 26 ]`. The pivot is placed somewhere inside one of the two partitions, but the algorithm doesn't tell you which one or where. If the pivot value occurs more than once, then some instances may appear in the left partition and others may appear in the right partition. @@ -357,7 +357,7 @@ And again: And so on... -That's no good, because this pretty much reduces quicksort to the much slower insertion sort. For quicksort to be efficient, it needs to split the array into roughly two halves. +That's no good, because this pretty much reduces quicksort to the much slower insertion sort. For quicksort to be efficient, it needs to split the array into roughly two halves. The optimal pivot for this example would have been `4`, so we'd get: diff --git a/Segment Tree/README.markdown b/Segment Tree/README.markdown index 15b1a3227..9897c9c3b 100644 --- a/Segment Tree/README.markdown +++ b/Segment Tree/README.markdown @@ -18,7 +18,7 @@ var a = [ 20, 3, -1, 101, 14, 29, 5, 61, 99 ] We want to query this array on the interval from 3 to 7 for the function "sum".
That means we do the following: 101 + 14 + 29 + 5 + 61 = 210 - + because `101` is at index 3 in the array and `61` is at index 7. So we pass all the numbers between `101` and `61` to the sum function, which adds them all up. If we had used the "min" function, the result would have been `5` because that's the smallest number in the interval from 3 to 7. Here's a naive approach if our array's type is `Int` and **f** is just the sum of two integers: @@ -43,7 +43,7 @@ The main idea of segment trees is simple: we precalculate some segments in our a ## Structure of segment tree -A segment tree is just a [binary tree](../Binary Tree/) where each node is an instance of the `SegmentTree` class: +A segment tree is just a [binary tree](../Binary%20Tree/) where each node is an instance of the `SegmentTree` class: ```swift public class SegmentTree<T> { @@ -116,18 +116,18 @@ Here's the code: if self.leftBound == leftBound && self.rightBound == rightBound { return self.value } - + guard let leftChild = leftChild else { fatalError("leftChild should not be nil") } guard let rightChild = rightChild else { fatalError("rightChild should not be nil") } - + // 2 if leftChild.rightBound < leftBound { return rightChild.query(withLeftBound: leftBound, rightBound: rightBound) - + // 3 } else if rightChild.leftBound > rightBound { return leftChild.query(withLeftBound: leftBound, rightBound: rightBound) - + // 4 } else { let leftResult = leftChild.query(withLeftBound: leftBound, rightBound: leftChild.rightBound) diff --git a/Selection Sort/README.markdown b/Selection Sort/README.markdown index e95749d14..cb8ed59c2 100644 --- a/Selection Sort/README.markdown +++ b/Selection Sort/README.markdown @@ -2,7 +2,7 @@ Goal: Sort an array from low to high (or high to low). -You are given an array of numbers and need to put them in the right order. The selection sort algorithm divides the array into two parts: the beginning of the array is sorted, while the rest of the array consists of the numbers that still remain to be sorted. +You are given an array of numbers and need to put them in the right order. The selection sort algorithm divides the array into two parts: the beginning of the array is sorted, while the rest of the array consists of the numbers that still remain to be sorted. [ ...sorted numbers... | ...unsorted numbers... ] @@ -23,7 +23,7 @@ It's called a "selection" sort, because at every step you search through the res ## An example -Let's say the numbers to sort are `[ 5, 8, 3, 4, 6 ]`. We also keep track of where the sorted portion of the array ends, denoted by the `|` symbol. +Let's say the numbers to sort are `[ 5, 8, 3, 4, 6 ]`. We also keep track of where the sorted portion of the array ends, denoted by the `|` symbol. Initially, the sorted portion is empty: @@ -65,14 +65,14 @@ func selectionSort(_ array: [Int]) -> [Int] { var a = array // 2 for x in 0 ..< a.count - 1 { // 3 - + var lowest = x for y in x + 1 ..< a.count { // 4 if a[y] < a[lowest] { lowest = y } } - + if x != lowest { // 5 swap(&a[x], &a[lowest]) } @@ -108,7 +108,7 @@ The source file [SelectionSort.swift](SelectionSort.swift) has a version of this ## Performance -Selection sort is easy to understand but it performs quite badly, **O(n^2)**. It's worse than [insertion sort](../Insertion%20Sort/) but better than [bubble sort](../Bubble Sort/). The killer is finding the lowest element in the rest of the array. This takes up a lot of time, especially since the inner loop will be performed over and over.
+Selection sort is easy to understand but it performs quite badly, **O(n^2)**. It's worse than [insertion sort](../Insertion%20Sort/) but better than [bubble sort](../Bubble%20Sort/). The killer is finding the lowest element in the rest of the array. This takes up a lot of time, especially since the inner loop will be performed over and over. [Heap sort](../Heap%20Sort/) uses the same principle as selection sort but has a really fast method for finding the minimum value in the rest of the array. Its performance is **O(n log n)**.
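For completeness, a quick playground check of the `selectionSort()` function shown earlier, using the same numbers as the walkthrough:

```swift
selectionSort([5, 8, 3, 4, 6])  // [3, 4, 5, 6, 8]
```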