fix: broken links
weihanglo committed Apr 4, 2017
1 parent 6f2cdf4 commit 3f7aaa2
Showing 14 changed files with 92 additions and 92 deletions.
4 changes: 2 additions & 2 deletions AVL Tree/README.markdown
@@ -53,7 +53,7 @@ For the rotation we're using the terminology:
* *RotationSubtree* - subtree of the *Pivot* upon the side of rotation
* *OppositeSubtree* - subtree of the *Pivot* opposite the side of rotation

Let's take an example of balancing an unbalanced tree using a *Right* (clockwise direction) rotation:

![Rotation1](Images/RotationStep1.jpg) ![Rotation2](Images/RotationStep2.jpg) ![Rotation3](Images/RotationStep3.jpg)
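
The right rotation in these pictures can be sketched in a few lines of Swift. This is a simplified illustration on a bare node type (hypothetical names, not the code from AVLTree.swift):

```swift
final class Node {
  var value: Int
  var left: Node?
  var right: Node?
  init(_ value: Int, left: Node? = nil, right: Node? = nil) {
    self.value = value
    self.left = left
    self.right = right
  }
}

// Right (clockwise) rotation: the pivot is the root's left child.
// The pivot's right subtree is re-attached as the old root's left child,
// and the pivot becomes the new root of this subtree.
func rotateRight(_ root: Node) -> Node {
  let pivot = root.left!   // precondition: a left child exists
  root.left = pivot.right
  pivot.right = root
  return pivot
}
```

Calling `rotateRight` on the unbalanced chain `3 → 2 → 1` from the pictures yields a balanced subtree rooted at `2`.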

@@ -76,7 +76,7 @@ Insertion never needs more than 2 rotations. Removal might require up to __log(n

## The code

Most of the code in [AVLTree.swift](AVLTree.swift) is just regular [binary search tree](../Binary%20Search%20Tree/) stuff. You'll find this in any implementation of a binary search tree. For example, searching the tree is exactly the same. The only things that an AVL tree does slightly differently are inserting and deleting the nodes.

> **Note:** If you're a bit fuzzy on the regular operations of a binary search tree, I suggest you [catch up on those first](../Binary%20Search%20Tree/). It will make the rest of the AVL tree easier to understand.
2 changes: 1 addition & 1 deletion Bounded Priority Queue/README.markdown
@@ -26,7 +26,7 @@ Suppose that we wish to insert the element `G` with priority 0.1 into this BPQ.

## Implementation

While a [heap](../Heap/) may be a really simple implementation for a priority queue, a sorted [linked list](../Linked%20List/) allows for **O(k)** insertion and **O(1)** deletion, where **k** is the bounding number of elements.
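
As a rough illustration of those bounds, here is a sorted-array sketch (illustrative names, not the repository's linked-list implementation). The array is kept in ascending priority order, so the highest-priority element can be removed from the back in **O(1)**, while insertion scans at most **k** elements. Whether you evict the lowest- or highest-priority element when the queue overflows depends on the use case; this sketch evicts the lowest:

```swift
struct BoundedPriorityQueue<T> {
  private var elements: [(value: T, priority: Double)] = []  // ascending priority
  let bound: Int

  init(bound: Int) { self.bound = bound }

  // O(k): scan for the position that keeps priorities ascending.
  mutating func enqueue(_ value: T, priority: Double) {
    let index = elements.firstIndex { $0.priority > priority } ?? elements.count
    elements.insert((value, priority), at: index)
    if elements.count > bound {
      elements.removeFirst()   // evict the lowest-priority element
    }
  }

  // O(1): the highest-priority element sits at the back.
  mutating func dequeue() -> T? {
    return elements.popLast()?.value
  }
}
```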

Here's how you could implement it in Swift:

6 changes: 3 additions & 3 deletions Count Occurrences/README.markdown
@@ -36,7 +36,7 @@ func countOccurrencesOfKey(_ key: Int, inArray a: [Int]) -> Int {
    }
    return low
  }

  func rightBoundary() -> Int {
    var low = 0
    var high = a.count
@@ -50,12 +50,12 @@ func countOccurrencesOfKey(_ key: Int, inArray a: [Int]) -> Int {
    }
    return low
  }

  return rightBoundary() - leftBoundary()
}
```

Notice that the helper functions `leftBoundary()` and `rightBoundary()` are very similar to the [binary search](../Binary%20Search/) algorithm. The big difference is that they don't stop when they find the search key, but keep going.
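
For reference, here is the whole function assembled into a self-contained form. The `rightBoundary()` half matches the excerpt above; the `leftBoundary()` body is reconstructed on the same pattern, so treat it as a sketch:

```swift
func countOccurrencesOfKey(_ key: Int, inArray a: [Int]) -> Int {
  // First index whose value is >= key.
  func leftBoundary() -> Int {
    var low = 0
    var high = a.count
    while low < high {
      let midIndex = low + (high - low)/2
      if a[midIndex] < key {
        low = midIndex + 1
      } else {
        high = midIndex
      }
    }
    return low
  }

  // First index whose value is > key.
  func rightBoundary() -> Int {
    var low = 0
    var high = a.count
    while low < high {
      let midIndex = low + (high - low)/2
      if a[midIndex] > key {
        high = midIndex
      } else {
        low = midIndex + 1
      }
    }
    return low
  }

  return rightBoundary() - leftBoundary()
}
```

For example, `countOccurrencesOfKey(3, inArray: [0, 1, 1, 3, 3, 3, 4])` returns `3`.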

To test this algorithm, copy the code to a playground and then do:

6 changes: 3 additions & 3 deletions Depth-First Search/README.markdown
@@ -40,7 +40,7 @@ func depthFirstSearch(_ graph: Graph, source: Node) -> [String] {
}
```

Where a [breadth-first search](../Breadth-First%20Search/) visits all immediate neighbors first, a depth-first search tries to go as deep down the tree or graph as it can.

Put this code in a playground and test it like so:

@@ -71,13 +71,13 @@ print(nodesExplored)
```

This will output: `["a", "b", "d", "e", "h", "f", "g", "c"]`
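
To make that output reproducible without the repository's `Graph` class, here is a self-contained sketch using a plain adjacency dictionary. The edge lists below are chosen so they reproduce the example's traversal order; they are an assumption, not necessarily the article's exact graph:

```swift
func depthFirstSearch(_ graph: [String: [String]], source: String) -> [String] {
  var nodesExplored = [String]()
  func visit(_ node: String) {
    nodesExplored.append(node)
    // The `where` clause is re-evaluated after each recursive call,
    // so already-visited nodes are skipped.
    for neighbor in graph[node] ?? [] where !nodesExplored.contains(neighbor) {
      visit(neighbor)
    }
  }
  visit(source)
  return nodesExplored
}

let graph = [
  "a": ["b", "c"],
  "b": ["d", "e"],
  "e": ["h"],
  "h": ["f", "g"],
]
depthFirstSearch(graph, source: "a")  // ["a", "b", "d", "e", "h", "f", "g", "c"]
```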

## What is DFS good for?

Depth-first search can be used to solve many problems, for example:

* Finding connected components of a sparse graph
* [Topological sorting](../Topological%20Sort/) of nodes in a graph
* Finding bridges of a graph (see: [Bridges](https://en.wikipedia.org/wiki/Bridge_(graph_theory)#Bridge-finding_algorithm))
* And lots of others!

42 changes: 21 additions & 21 deletions Deque/README.markdown
@@ -9,43 +9,43 @@ Here is a very basic implementation of a deque in Swift:
```swift
public struct Deque<T> {
  private var array = [T]()

  public var isEmpty: Bool {
    return array.isEmpty
  }

  public var count: Int {
    return array.count
  }

  public mutating func enqueue(_ element: T) {
    array.append(element)
  }

  public mutating func enqueueFront(_ element: T) {
    array.insert(element, at: 0)
  }

  public mutating func dequeue() -> T? {
    if isEmpty {
      return nil
    } else {
      return array.removeFirst()
    }
  }

  public mutating func dequeueBack() -> T? {
    if isEmpty {
      return nil
    } else {
      return array.removeLast()
    }
  }

  public func peekFront() -> T? {
    return array.first
  }

  public func peekBack() -> T? {
    return array.last
  }
@@ -73,7 +73,7 @@ deque.dequeue() // 5
This particular implementation of `Deque` is simple but not very efficient. Several operations are **O(n)**, notably `enqueueFront()` and `dequeue()`. I've included it only to show the principle of what a deque does.

## A more efficient version

The reason that `dequeue()` and `enqueueFront()` are **O(n)** is that they work on the front of the array. If you remove an element at the front of an array, what happens is that all the remaining elements need to be shifted in memory.

Let's say the deque's array contains the following items:
@@ -92,7 +92,7 @@ Likewise, inserting an element at the front of the array is expensive because it

First, the elements `2`, `3`, and `4` are moved up by one position in the computer's memory, and then the new element `5` is inserted at the position where `2` used to be.

Why is this not an issue for `enqueue()` and `dequeueBack()`? Well, these operations are performed at the end of the array. The way resizable arrays are implemented in Swift is by reserving a certain amount of free space at the back.

Our initial array `[ 1, 2, 3, 4]` actually looks like this in memory:

@@ -120,26 +120,26 @@ public struct Deque<T> {
  private var head: Int
  private var capacity: Int
  private let originalCapacity: Int

  public init(_ capacity: Int = 10) {
    self.capacity = max(capacity, 1)
    originalCapacity = self.capacity
    array = [T?](repeating: nil, count: capacity)
    head = capacity
  }

  public var isEmpty: Bool {
    return count == 0
  }

  public var count: Int {
    return array.count - head
  }

  public mutating func enqueue(_ element: T) {
    array.append(element)
  }

  public mutating func enqueueFront(_ element: T) {
    // this is explained below
  }
@@ -155,15 +155,15 @@ public struct Deque<T> {
      return array.removeLast()
    }
  }

  public func peekFront() -> T? {
    if isEmpty {
      return nil
    } else {
      return array[head]
    }
  }

  public func peekBack() -> T? {
    if isEmpty {
      return nil
@@ -176,7 +176,7 @@ public struct Deque<T> {

It still largely looks the same -- `enqueue()` and `dequeueBack()` haven't changed -- but there are also a few important differences. The array now stores objects of type `T?` instead of just `T` because we need some way to mark array elements as being empty.

The `init` method allocates a new array that contains a certain number of `nil` values. This is the free room we have to work with at the beginning of the array. By default this creates 10 empty spots.

The `head` variable is the index in the array of the front-most object. Since the queue is currently empty, `head` points at an index beyond the end of the array.

@@ -219,7 +219,7 @@ Notice how the array has resized itself. There was no room to add the `1`, so Sw
|
head

> **Note:** You won't see those empty spots at the back of the array when you `print(deque.array)`. This is because Swift hides them from you. Only the ones at the front of the array show up.

The `dequeue()` method does the opposite of `enqueueFront()`: it reads the value at `head`, sets the array element back to `nil`, and then moves `head` one position to the right:

@@ -250,7 +250,7 @@ There is one tiny problem... If you enqueue a lot of objects at the front, you'r
}
```

If `head` equals 0, there is no room left at the front. When that happens, we add a whole bunch of new `nil` elements to the array. This is an **O(n)** operation but since this cost gets divided over all the `enqueueFront()`s, each individual call to `enqueueFront()` is still **O(1)** on average.

> **Note:** We also multiply the capacity by 2 each time this happens, so if your queue grows bigger and bigger, the resizing happens less often. This is also what Swift arrays automatically do at the back.
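
Putting the pieces together, here is a condensed, self-contained version of just the front-end machinery described above: `enqueueFront()` with its resizing step, plus `dequeue()`. This is a sketch assembled from the description, not the repository's full implementation (which also trims excess free space):

```swift
public struct Deque<T> {
  private var array: [T?]
  private var head: Int
  private var capacity: Int

  public init(_ capacity: Int = 10) {
    self.capacity = max(capacity, 1)
    array = [T?](repeating: nil, count: self.capacity)
    head = self.capacity
  }

  public var count: Int { return array.count - head }

  public mutating func enqueueFront(_ element: T) {
    if head == 0 {
      // No room at the front: double the capacity and insert
      // that much free space before the existing elements.
      capacity *= 2
      let emptySpace = [T?](repeating: nil, count: capacity)
      array.insert(contentsOf: emptySpace, at: 0)
      head = capacity
    }
    head -= 1
    array[head] = element
  }

  public mutating func dequeue() -> T? {
    guard head < array.count, let element = array[head] else { return nil }
    array[head] = nil      // clear the slot
    head += 1              // move the front one position to the right
    return element
  }
}
```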
@@ -302,7 +302,7 @@ This way we can strike a balance between fast enqueuing and dequeuing at the fro
## See also

Other ways to implement deque are by using a [doubly linked list](../Linked%20List/), a [circular buffer](../Ring%20Buffer/), or two [stacks](../Stack/) facing opposite directions.

[A fully-featured deque implementation in Swift](https://github.com/lorentey/Deque)

2 changes: 1 addition & 1 deletion Heap Sort/README.markdown
@@ -40,7 +40,7 @@ And fix up the heap to make it valid max-heap again:

As you can see, the largest items are making their way to the back. We repeat this process until we arrive at the root node and then the whole array is sorted.

> **Note:** This process is very similar to [selection sort](../Selection%20Sort/), which repeatedly looks for the minimum item in the remainder of the array. Extracting the minimum or maximum value is what heaps are good at.

Performance of heap sort is **O(n lg n)** in best, worst, and average case. Because we modify the array directly, heap sort can be performed in-place. But it is not a stable sort: the relative order of identical elements is not preserved.
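
The whole procedure fits in a short function. Here is a sketch of an in-place heap sort (not the repository's exact code, which builds on its `Heap` type):

```swift
func heapSort(_ a: inout [Int]) {
  // Restore the max-heap property for the subtree rooted at `start`,
  // considering only elements up to index `end`.
  func siftDown(_ start: Int, _ end: Int) {
    var root = start
    while 2*root + 1 <= end {
      var child = 2*root + 1
      if child + 1 <= end && a[child] < a[child + 1] {
        child += 1                // pick the larger child
      }
      if a[root] < a[child] {
        a.swapAt(root, child)
        root = child
      } else {
        return
      }
    }
  }

  guard a.count > 1 else { return }

  // Phase 1: turn the array into a max-heap.
  for start in stride(from: a.count/2 - 1, through: 0, by: -1) {
    siftDown(start, a.count - 1)
  }

  // Phase 2: repeatedly swap the maximum to the back and shrink the heap.
  for end in stride(from: a.count - 1, through: 1, by: -1) {
    a.swapAt(0, end)
    siftDown(0, end - 1)
  }
}
```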

28 changes: 14 additions & 14 deletions Huffman Coding/README.markdown
@@ -16,7 +16,7 @@ If you count how often each byte appears, you can clearly see that some bytes oc
c: 2 p: 1
r: 2 e: 1
n: 2 i: 1
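
Counting these frequencies takes a single pass over the input bytes. A minimal standalone sketch (inside the `Huffman` class this happens in `countByteFrequency()`; this version is just for illustration):

```swift
import Foundation

func byteFrequencies(of data: Data) -> [UInt8: Int] {
  var frequencies = [UInt8: Int]()
  for byte in data {                    // Data is a collection of UInt8
    frequencies[byte, default: 0] += 1
  }
  return frequencies
}

let input = "so much words wow many compression".data(using: .utf8)!
let frequencies = byteFrequencies(of: input)
// frequencies[UInt8(ascii: "o")] == 5, frequencies[UInt8(ascii: " ")] == 5
```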

We can assign bit strings to each of these bytes. The more common a byte is, the fewer bits we assign to it. We might get something like this:

space: 5 010 u: 1 11001
@@ -30,12 +30,12 @@ We can assign bit strings to each of these bytes. The more common a byte is, the

Now if we replace the original bytes with these bit strings, the compressed output becomes:

101 000 010 111 11001 0011 10001 010 0010 000 1001 11010 101
s o _ m u c h _ w o r d s

010 0010 000 0010 010 111 11011 0110 01111 010 0011 000 111
_ w o w _ m a n y _ c o m

11000 1001 01110 101 101 10000 000 0110 0
p r e s s i o n

@@ -57,7 +57,7 @@ The edges between the nodes either say "1" or "0". These correspond to the bit-e

Compression is then a matter of looping through the input bytes, and for each byte traverse the tree from the root node to that byte's leaf node. Every time we take a left branch, we emit a 1-bit. When we take a right branch, we emit a 0-bit.

For example, to go from the root node to `c`, we go right (`0`), right again (`0`), left (`1`), and left again (`1`). So the Huffman code for `c` is `0011`.

Decompression works in exactly the opposite way. It reads the compressed bits one-by-one and traverses the tree until we get to a leaf node. The value of that leaf node is the uncompressed byte. For example, if the bits are `11010`, we start at the root and go left, left again, right, left, and a final right to end up at `d`.

@@ -137,7 +137,7 @@ Here are the definitions we need:
```swift
class Huffman {
  typealias NodeIndex = Int

  struct Node {
    var count = 0
    var index: NodeIndex = -1
@@ -152,7 +152,7 @@ class Huffman {
  }
```

The tree structure is stored in the `tree` array and will be made up of `Node` objects. Since this is a [binary tree](../Binary%20Tree/), each node needs two children, `left` and `right`, and a reference back to its `parent` node. Unlike a typical binary tree, however, these nodes don't use pointers to refer to each other but simple integer indices in the `tree` array. (We also store the array `index` of the node itself; the reason for this will become clear later.)

Note that `tree` currently has room for 256 entries. These are for the leaf nodes because there are 256 possible byte values. Of course, not all of those may end up being used, depending on the input data. Later, we'll add more nodes as we build up the actual tree. For the moment there isn't a tree yet, just 256 separate leaf nodes with no connections between them. All the node counts are 0.

@@ -183,7 +183,7 @@ Instead, we'll add a method to export the frequency table without all the pieces
    var byte: UInt8 = 0
    var count = 0
  }

  func frequencyTable() -> [Freq] {
    var a = [Freq]()
    for i in 0..<256 where tree[i].count > 0 {
@@ -209,7 +209,7 @@ To build the tree, we do the following:
2. Create a new parent node that links these two nodes together.
3. This repeats over and over until only one node with no parent remains. This becomes the root node of the tree.

This is an ideal place to use a [priority queue](../Priority%20Queue/). A priority queue is a data structure that is optimized so that finding the minimum value is always very fast. Here, we repeatedly need to find the node with the smallest count.

The function `buildTree()` then becomes:

@@ -233,7 +233,7 @@ The function `buildTree()` then becomes:

    tree[node1.index].parent = parentNode.index // 4
    tree[node2.index].parent = parentNode.index

    queue.enqueue(parentNode) // 5
  }

@@ -286,7 +286,7 @@ Now that we know how to build the compression tree from the frequency table, we
}
```

This first calls `countByteFrequency()` to build the frequency table, then `buildTree()` to put together the compression tree. It also creates a `BitWriter` object for writing individual bits.

Then it loops through the entire input and for each byte calls `traverseTree()`. That method will step through the tree nodes and for each node write a 1 or 0 bit. Finally, we return the `BitWriter`'s data object.
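
The article doesn't show `BitWriter` here, so the following is a guessed-at sketch of such a type (the names and details are assumptions; see [Huffman.swift](Huffman.swift) for the real one). It packs individual bits into bytes, most significant bit first, and pads the final byte with zeros:

```swift
import Foundation

public class BitWriter {
  public var data = Data()
  private var outByte: UInt8 = 0
  private var outCount = 0

  // Append a single bit to the output stream.
  public func writeBit(_ bit: Bool) {
    if outCount == 8 {              // current byte is full: flush it
      data.append(outByte)
      outByte = 0
      outCount = 0
    }
    outByte = (outByte << 1) | (bit ? 1 : 0)
    outCount += 1
  }

  // Write out any remaining bits, padding the last byte with zeros.
  public func flush() {
    if outCount > 0 {
      outByte <<= UInt8(8 - outCount)
      data.append(outByte)
      outByte = 0
      outCount = 0
    }
  }
}
```

For example, writing the bits `1`, `0`, `1` and flushing produces the single byte `0b10100000`.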

@@ -309,7 +309,7 @@ The interesting stuff happens in `traverseTree()`. This is a recursive method:
}
```

When we call this method from `compressData()`, the `nodeIndex` parameter is the array index of the leaf node for the byte that we're about to encode. This method recursively walks the tree from a leaf node up to the root, and then back again.

As we're going back from the root to the leaf node, we write a 1 bit or a 0 bit for every node we encounter. If a child is the left node, we emit a 1; if it's the right node, we emit a 0.

@@ -395,10 +395,10 @@ Here's how you would use the decompression method:

```swift
let frequencyTable = huffman1.frequencyTable()

let huffman2 = Huffman()
let decompressedData = huffman2.decompressData(compressedData, frequencyTable: frequencyTable)

let s2 = String(data: decompressedData, encoding: .utf8)!
```
