diff --git a/_cs118/118complete.md b/_cs118/118complete.md index 6ac50702..fe9945c9 100644 --- a/_cs118/118complete.md +++ b/_cs118/118complete.md @@ -1,5 +1,5 @@ --- -layout: 118/CS118 +layout: CS118 part: false title: 118 One Page Notes --- \ No newline at end of file diff --git a/_cs118/combined.md b/_cs118/combined.md index 09fc1078..c54b1cbd 100644 --- a/_cs118/combined.md +++ b/_cs118/combined.md @@ -1,5 +1,5 @@ --- -layout: 118/CS118 +layout: CS118 math: true title: CS118 Combined Notes - Edmund Goodman --- diff --git a/_cs118/cribSheet.md b/_cs118/cribSheet.md index 1113eb99..a9a0ea68 100644 --- a/_cs118/cribSheet.md +++ b/_cs118/cribSheet.md @@ -1,5 +1,5 @@ --- -layout: 118/CS118 +layout: CS118 math: true title: CS118 Crib Sheet - Edmund Goodman --- diff --git a/_cs118/index.md b/_cs118/index.md index 70859175..7be19e38 100755 --- a/_cs118/index.md +++ b/_cs118/index.md @@ -1,5 +1,5 @@ --- -layout: 118/CS118 +layout: default title: CS118 --- diff --git a/_cs118/part1.md b/_cs118/part1.md index 5136a122..dd71701f 100644 --- a/_cs118/part1.md +++ b/_cs118/part1.md @@ -1,8 +1,8 @@ --- -layout: 118/CS118 +layout: CS118 part: true title: Variables, Number Systems, and I/O -nextt: part2.html +nex: part2 --- # Variables, Number Systems, and I/O diff --git a/_cs118/part2.md b/_cs118/part2.md index a5f746d7..ab5735a7 100644 --- a/_cs118/part2.md +++ b/_cs118/part2.md @@ -1,9 +1,9 @@ --- -layout: 118/CS118 +layout: CS118 part: true title: "Conditional Statements" -prev: part1.html -nextt: part3.html +pre: part1 +nex: part3 --- # Conditional Statements diff --git a/_cs118/part3.md b/_cs118/part3.md index c99aff7b..43957429 100644 --- a/_cs118/part3.md +++ b/_cs118/part3.md @@ -1,9 +1,9 @@ --- -layout: 118/CS118 +layout: CS118 part: true title: "Arrays, Methods, Scope, and Recursion" -prev: part2.html -nextt: part4.html +pre: part2 +nex: part4 --- # Arrays, Methods, Scope, and Recursion diff --git a/_cs118/part4.md b/_cs118/part4.md index 8ac3b96c..fbab4de6 100755 --- a/_cs118/part4.md +++ b/_cs118/part4.md @@ -1,9 +1,9 @@ --- -layout: 118/CS118 +layout: CS118 part: true title: "Object Oriented Programming" -prev: part3.html -nextt: part5.html +pre: part3 +nex: part5 --- # Object Oriented Programming (OOP) diff --git a/_cs118/part5.md b/_cs118/part5.md index 91587cb7..96271cdb 100755 --- a/_cs118/part5.md +++ b/_cs118/part5.md @@ -1,9 +1,9 @@ --- -layout: 118/CS118 +layout: CS118 part: true title: "Inheritance, Abstract Classes, and Interfaces" -prev: part4.html -nextt: part6.html +pre: part4 +nex: part6 --- # Inheritance, Abstract Classes, and Interfaces diff --git a/_cs118/part6.md b/_cs118/part6.md index c7588e5b..18a4b2c5 100644 --- a/_cs118/part6.md +++ b/_cs118/part6.md @@ -1,9 +1,9 @@ --- -layout: 118/CS118 +layout: CS118 part: true title: "Error Handling and Exceptions" -prev: part5.html -nextt: part7.html +pre: part5 +nex: part7 --- # Error Handling and Exceptions diff --git a/_cs118/part7.md b/_cs118/part7.md index c93db54b..3f6b11d9 100644 --- a/_cs118/part7.md +++ b/_cs118/part7.md @@ -1,8 +1,8 @@ --- -layout: 118/CS118 +layout: CS118 part: true title: "Generics and the Java Class Library" -prev: part6.html +pre: part6 --- # Generics and the Java Class Library diff --git a/_includes/partnav.html b/_includes/partnav.html new file mode 100644 index 00000000..95f2686c --- /dev/null +++ b/_includes/partnav.html @@ -0,0 +1,7 @@ +{%- if include.pre != "null" -%} +👈Prev +{%- endif -%} +🏡{{ include.mod }} +{%- if include.nex != "null" -%} +Next👉 +{%- endif -%} \ No newline at end of file 
diff --git a/_includes/toc.html b/_includes/toc.html new file mode 100644 index 00000000..8c710072 --- /dev/null +++ b/_includes/toc.html @@ -0,0 +1,182 @@ +{% capture tocWorkspace %} + {% comment %} + Copyright (c) 2017 Vladimir "allejo" Jimenez + + Permission is hereby granted, free of charge, to any person + obtaining a copy of this software and associated documentation + files (the "Software"), to deal in the Software without + restriction, including without limitation the rights to use, + copy, modify, merge, publish, distribute, sublicense, and/or sell + copies of the Software, and to permit persons to whom the + Software is furnished to do so, subject to the following + conditions: + + The above copyright notice and this permission notice shall be + included in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES + OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + OTHER DEALINGS IN THE SOFTWARE. + {% endcomment %} + {% comment %} + Version 1.1.0 + https://github.com/allejo/jekyll-toc + + "...like all things liquid - where there's a will, and ~36 hours to spare, there's usually a/some way" ~jaybe + + Usage: + {% include toc.html html=content sanitize=true class="inline_toc" id="my_toc" h_min=2 h_max=3 %} + + Parameters: + * html (string) - the HTML of compiled markdown generated by kramdown in Jekyll + + Optional Parameters: + * sanitize (bool) : false - when set to true, the headers will be stripped of any HTML in the TOC + * class (string) : '' - a CSS class assigned to the TOC + * id (string) : '' - an ID to assigned to the TOC + * h_min (int) : 1 - the minimum TOC header level to use; any header lower than this value will be ignored + * h_max (int) : 6 - the maximum TOC header level to use; any header greater than this value will be ignored + * ordered (bool) : false - when set to true, an ordered list will be outputted instead of an unordered list + * item_class (string) : '' - add custom class(es) for each list item; has support for '%level%' placeholder, which is the current heading level + * submenu_class (string) : '' - add custom class(es) for each child group of headings; has support for '%level%' placeholder which is the current "submenu" heading level + * base_url (string) : '' - add a base url to the TOC links for when your TOC is on another page than the actual content + * anchor_class (string) : '' - add custom class(es) for each anchor element + * skip_no_ids (bool) : false - skip headers that do not have an `id` attribute + + Output: + An ordered or unordered list representing the table of contents of a markdown block. 
This snippet will only + generate the table of contents and will NOT output the markdown given to it + {% endcomment %} + + {% capture newline %} + {% endcapture %} + {% assign newline = newline | rstrip %} + + {% capture deprecation_warnings %}{% endcapture %} + + {% if include.baseurl %} + {% capture deprecation_warnings %}{{ deprecation_warnings }}{{ newline }}{% endcapture %} + {% endif %} + + {% if include.skipNoIDs %} + {% capture deprecation_warnings %}{{ deprecation_warnings }}{{ newline }}{% endcapture %} + {% endif %} + + {% capture jekyll_toc %}{% endcapture %} + {% assign orderedList = include.ordered | default: false %} + {% assign baseURL = include.base_url | default: include.baseurl | default: '' %} + {% assign skipNoIDs = include.skip_no_ids | default: include.skipNoIDs | default: false %} + {% assign minHeader = include.h_min | default: 1 %} + {% assign maxHeader = include.h_max | default: 6 %} + {% assign nodes = include.html | strip | split: ' maxHeader %} + {% continue %} + {% endif %} + + {% assign _workspace = node | split: '' | first }}>{% endcapture %} + {% assign header = _workspace[0] | replace: _hAttrToStrip, '' %} + + {% if include.item_class and include.item_class != blank %} + {% capture listItemClass %} class="{{ include.item_class | replace: '%level%', currLevel | split: '.' | join: ' ' }}"{% endcapture %} + {% endif %} + + {% if include.submenu_class and include.submenu_class != blank %} + {% assign subMenuLevel = currLevel | minus: 1 %} + {% capture subMenuClass %} class="{{ include.submenu_class | replace: '%level%', subMenuLevel | split: '.' | join: ' ' }}"{% endcapture %} + {% endif %} + + {% capture anchorBody %}{% if include.sanitize %}{{ header | strip_html }}{% else %}{{ header }}{% endif %}{% endcapture %} + + {% if htmlID %} + {% capture anchorAttributes %} href="{% if baseURL %}{{ baseURL }}{% endif %}#{{ htmlID }}"{% endcapture %} + + {% if include.anchor_class %} + {% capture anchorAttributes %}{{ anchorAttributes }} class="{{ include.anchor_class | split: '.' | join: ' ' }}"{% endcapture %} + {% endif %} + + {% capture listItem %}{{ anchorBody }}{% endcapture %} + {% elsif skipNoIDs == true %} + {% continue %} + {% else %} + {% capture listItem %}{{ anchorBody }}{% endcapture %} + {% endif %} + + {% if currLevel > lastLevel %} + {% capture jekyll_toc %}{{ jekyll_toc }}<{{ listModifier }}{{ subMenuClass }}>{% endcapture %} + {% elsif currLevel < lastLevel %} + {% assign repeatCount = lastLevel | minus: currLevel %} + + {% for i in (1..repeatCount) %} + {% capture jekyll_toc %}{{ jekyll_toc }}{% endcapture %} + {% endfor %} + + {% capture jekyll_toc %}{{ jekyll_toc }}{% endcapture %} + {% else %} + {% capture jekyll_toc %}{{ jekyll_toc }}{% endcapture %} + {% endif %} + + {% capture jekyll_toc %}{{ jekyll_toc }}{{ listItem }}{% endcapture %} + + {% assign lastLevel = currLevel %} + {% assign firstHeader = false %} + {% endfor %} + + {% assign repeatCount = minHeader | minus: 1 %} + {% assign repeatCount = lastLevel | minus: repeatCount %} + {% for i in (1..repeatCount) %} + {% capture jekyll_toc %}{{ jekyll_toc }}{% endcapture %} + {% endfor %} + + {% if jekyll_toc != '' %} + {% assign rootAttributes = '' %} + {% if include.class and include.class != blank %} + {% capture rootAttributes %} class="{{ include.class | split: '.' 
| join: ' ' }}"{% endcapture %} + {% endif %} + + {% if include.id and include.id != blank %} + {% capture rootAttributes %}{{ rootAttributes }} id="{{ include.id }}"{% endcapture %} + {% endif %} + + {% if rootAttributes %} + {% assign nodes = jekyll_toc | split: '>' %} + {% capture jekyll_toc %}<{{ listModifier }}{{ rootAttributes }}>{{ nodes | shift | join: '>' }}>{% endcapture %} + {% endif %} + {% endif %} +{% endcapture %}{% assign tocWorkspace = '' %}{{ deprecation_warnings }}{{ jekyll_toc }} diff --git a/_layouts/118/CS118.html b/_layouts/CS118.html similarity index 66% rename from _layouts/118/CS118.html rename to _layouts/CS118.html index 9e6f4455..92a14f52 100644 --- a/_layouts/118/CS118.html +++ b/_layouts/CS118.html @@ -2,15 +2,7 @@ layout: notes --- -{%- if page.part == true -%} - {% capture previous %}{{ page.prev | default: "null" }}{% endcapture %} - - {% capture nextp %}{{ page.nextt | default: "null" }}{% endcapture %} - - {% include 118/118nav.html nextt=nextp prev=previous %} - - {{ content }} -{%- elsif page.part == false -%} +{%- if page.part == false -%} 🏡CS118 Home {%- for page in site.cs118 -%} {%- if page.part == true -%} diff --git a/_layouts/126/CS126.html b/_layouts/CS126.html similarity index 100% rename from _layouts/126/CS126.html rename to _layouts/CS126.html diff --git a/_layouts/130/CS130.html b/_layouts/CS130.html similarity index 100% rename from _layouts/130/CS130.html rename to _layouts/CS130.html diff --git a/_layouts/132/CS132.html b/_layouts/CS132.html similarity index 100% rename from _layouts/132/CS132.html rename to _layouts/CS132.html diff --git a/_layouts/notes.html b/_layouts/notes.html index 449db91e..1300943d 100644 --- a/_layouts/notes.html +++ b/_layouts/notes.html @@ -16,6 +16,8 @@ {%- if page.math == true -%} {%- endif -%} + + {% seo %} @@ -38,21 +40,44 @@

{{ page.description }}

Download .tar.gz {% endif %} - -
- {{ content }} - - - -
+
+ +
+ +
+
+
+
+ {%- if page.part == true -%} + {% capture previous %}{{ page.pre | default: "null" }}{% endcapture %} + {% capture next %}{{ page.nex | default: "null" }}{% endcapture %} + {%- capture mod -%}{{ page.layout | default: "null"}}{%- endcapture -%} + + {% include partnav.html nex=next pre=previous mod=mod %} + {%- endif -%} +
+ + {{ content }} + + +
+
+
\ No newline at end of file diff --git a/assets/css/style.scss b/assets/css/style.scss index ec567a9e..9762b39a 100644 --- a/assets/css/style.scss +++ b/assets/css/style.scss @@ -3,6 +3,10 @@ @import "{{ site.theme }}"; +:root { + --buttonCol-width: 2em; +} + .center { display: block; margin-left: auto; @@ -15,6 +19,7 @@ margin-right: auto; } +/// Custom CSS properties for
tag details { cursor: pointer; padding: 1em; @@ -27,7 +32,7 @@ summary { font-weight: bold; } -// Image that grows when hovered +/// Image that grows when hovered .growimg { transition:transform 0.5s ease-in-out; transform-origin: top; @@ -39,6 +44,7 @@ summary { transform:scale(1.5); } +/// Overwrite default props for blockquotes .main-content blockquote { width: 100%; color: black; @@ -49,6 +55,7 @@ summary { margin: 1rem 0rem 1.5rem 0rem; } +// Used to signify that a certain blockquote is extra information blockquote.extra { border-left: 4px solid darkgoldenrod; } @@ -56,3 +63,106 @@ blockquote.extra { blockquote blockquote { padding-right: 0; } + +/// Grid outline for notes layout +.container { + display: grid; + width: 100%; + grid-template-columns: min-content var(--buttonCol-width) 1fr; + grid-template-areas: "nav button content"; + transition: width 0.5s ease; +} + +// Container that contains sidenav menu +.navBox { + grid-area: nav; + -webkit-transition: width 0.5s ease; + -moz-transition: width 0.5s ease; + -o-transition: width 0.5s ease; + transition: width 0.5s ease; +} + +// Actual sidenav menu - separated the 2 to allow for sticky sidenav +.sideNav { + width: 30vw; + height: 100vh; + overflow: scroll; + position: sticky; + top: 0; + background-color: whitesmoke; + padding-top: 1%; + -webkit-transition: width 0.5s ease; + -moz-transition: width 0.5s ease; + -o-transition: width 0.5s ease; + transition: width 0.5s ease; +} + +.closedNav { + width: 0px; +} + +/// NavBar toggle button +.buttonCol { + grid-area: button; + cursor: pointer; + background-color: whitesmoke; +} + +// The arrow symbol you see, following css properties are all for the rotation +.navArrow { + position: sticky; + top: 50vh; + width: var(--buttonCol-width); + height: var(--buttonCol-width); + display: block; +} + +.navArrow i { + position: absolute; + --arrow-width: calc(var(--buttonCol-width) * 0.4); + left: calc((var(--buttonCol-width) - var(--arrow-width)) / 2); // Align i to the center of navArrow div +} + +.navArrow i::before, +.navArrow i::after { + content: ''; + position: absolute; + width: var(--arrow-width); // both i and i::after will have length = 0.67 * width of container + height: 4px; + border-radius: 20px; + background-color: grey; +} + +.navArrow i::before { + transform: rotate(22.5deg); +} + +.navArrow i::after { + top: calc(var(--arrow-width) * 0.34); + transform: rotate(-22.5deg); +} + +.open i::before { + transform: rotate(-22.5deg); +} + +.open i::after { + transform: rotate(22.5deg); +} + +/// Contents of grid -- main content of markdown usually +.contents { + grid-area: content; +} + +.partNav { + display: flex; + z-index: 1; + justify-content: space-between; + background-color: white; + position: sticky; + font-size: 20px; + top: 0px; +} + + diff --git a/assets/js/tocNav.js b/assets/js/tocNav.js new file mode 100644 index 00000000..c0114d79 --- /dev/null +++ b/assets/js/tocNav.js @@ -0,0 +1,6 @@ +function toggleNav() { + document.querySelector(".sideNav").classList.toggle("closedNav"); + document.querySelector(".navArrow").classList.toggle("open"); +} + + diff --git a/cs126/part1.md b/cs126/part1.md index abef2542..92caaa04 100644 --- a/cs126/part1.md +++ b/cs126/part1.md @@ -1,16 +1,11 @@ --- -layout: 126/CS126 +layout: CS126 part: true math: true title: "Arrays and Lists" -nextt: part2.html +next: part2 --- -
- -# Table of contents -* TOC -{:toc} # Arrays (ADT) - Indexable fixed length sequence of variables of the type, stored contiguously diff --git a/cs126/part10.md index 6d5a88f8..a221726c 100644 --- a/cs126/part10.md +++ b/cs126/part10.md @@ -1,14 +1,8 @@ --- -layout: 126/CS126 +layout: CS126 part: true math: true title: "Self-balancing trees" -prev: part9.html -nextt: part11.html +pre: part9 +nex: part11 --- - -
- -# Table of contents -* TOC -{:toc} diff --git a/cs126/part11.md b/cs126/part11.md index b25c2b1f..6a55d9c7 100644 --- a/cs126/part11.md +++ b/cs126/part11.md @@ -1,14 +1,8 @@ --- -layout: 126/CS126 +layout: CS126 part: true math: true title: "Graphs" -prev: part10.html -nextt: part12.html +pre: part10 +nex: part12 --- - -
- -# Table of contents -* TOC -{:toc} diff --git a/cs126/part12.md b/cs126/part12.md index eee785e0..766388ee 100644 --- a/cs126/part12.md +++ b/cs126/part12.md @@ -1,16 +1,11 @@ --- -layout: 126/CS126 +layout: CS126 part: true math: true title: "General algorithms" -prev: part11.html +pre: part11 --- -
- -# Table of contents -* TOC -{:toc} # Sorting data structures diff --git a/cs126/part2.md b/cs126/part2.md index 2b3b7ea1..ef881f9a 100755 --- a/cs126/part2.md +++ b/cs126/part2.md @@ -1,18 +1,14 @@ --- -layout: 126/CS126 +layout: CS126 part: true math: true title: "Analysis of algorithms" -prev: part1.html -nextt: part3.html +pre: part1 +nex: part3 ---
-# Table of contents - -* TOC -{:toc} # Running time - To assess how good an algorithm is, we often use the metric of running time compared with the size of the input to the algorithm @@ -119,4 +115,3 @@ nextt: part3.html - $$f(n) \geq g(n)$$ in the limit of $$n \rightarrow \infin$$ - Big-Theta gives "asymptotically tight" $$\approx$$ average - $$f(n) = g(n)$$ in the limit of $$n \rightarrow \infin$$ - diff --git a/cs126/part3.md b/cs126/part3.md index e092b0b5..2a9eb7b7 100644 --- a/cs126/part3.md +++ b/cs126/part3.md @@ -1,14 +1,8 @@ --- -layout: 126/CS126 +layout: CS126 part: true math: true title: "Recursive algorithms" -prev: part2.html -nextt: part4.html +pre: part2 +nex: part4 --- - -
- -# Table of contents -* TOC -{:toc} diff --git a/cs126/part4.md b/cs126/part4.md index 59046f29..659930e3 100644 --- a/cs126/part4.md +++ b/cs126/part4.md @@ -1,17 +1,12 @@ --- -layout: 126/CS126 +layout: CS126 part: true math: true title: "Stacks and Queues" -prev: part3.html -nextt: part5.html +pre: part3 +nex: part5 --- -
- -# Table of contents -* TOC -{:toc} # Stacks (ADT) - A "Last in, first out" (LIFO) data structure - both insertions and deletions occur at the front of the stack diff --git a/cs126/part5.md b/cs126/part5.md index 94986dc5..6133a45d 100644 --- a/cs126/part5.md +++ b/cs126/part5.md @@ -1,17 +1,12 @@ --- -layout: 126/CS126 +layout: CS126 part: true math: true title: "Maps, hash tables and sets" -prev: part4.html -nextt: part6.html +pre: part4 +nex: part6 --- -
- -# Table of contents -* TOC -{:toc} # Maps (ADT) - "Searchable collection of key-value entries" (*Data Structures and Algorithms in Java*, Goodrich, Tamassia, Goldwasser) diff --git a/cs126/part6.md b/cs126/part6.md index 2db90e29..13fe685d 100644 --- a/cs126/part6.md +++ b/cs126/part6.md @@ -1,17 +1,12 @@ --- -layout: 126/CS126 +layout: CS126 part: true math: true title: "" -prev: part5.html -nextt: part7.html +pre: part5 +nex: part7 --- -
- -# Table of contents -* TOC -{:toc} # Trees (ADT) - "A tree is an abstract model of a hierarchical structure" *Data Structures and Algorithms in Java*, Goodrich, Tamassia, Goldwasser diff --git a/cs126/part7.md b/cs126/part7.md index 9d316918..bc73e7a9 100644 --- a/cs126/part7.md +++ b/cs126/part7.md @@ -1,17 +1,12 @@ --- -layout: 126/CS126 +layout: CS126 part: true math: true title: "Priority queues" -prev: part6.html -nextt: part8.html +pre: part6 +nex: part8 --- -
- -# Table of contents -* TOC -{:toc} # Priority queues (ADT) diff --git a/cs126/part8.md b/cs126/part8.md index fc3b4e5a..7775efe0 100644 --- a/cs126/part8.md +++ b/cs126/part8.md @@ -1,17 +1,12 @@ --- -layout: 126/CS126 +layout: CS126 part: true math: true title: "Heaps" -prev: part7.html -nextt: part9.html +pre: part7 +nex: part9 --- -
- -# Table of contents -* TOC -{:toc} # Heaps (ADT) - A heap is a binary tree storing keys at its nodes and satisfying the following properties: diff --git a/cs126/part9.md b/cs126/part9.md index b4988fd4..0608e2a9 100644 --- a/cs126/part9.md +++ b/cs126/part9.md @@ -1,14 +1,8 @@ --- -layout: 126/CS126 +layout: CS126 part: true math: true title: "Skip lists" -prev: part8.html -nextt: part10.html +pre: part8 +nex: part10 --- - -
- -# Table of contents -* TOC -{:toc} diff --git a/cs130/cribSheet.md b/cs130/cribSheet.md index e835239c..0d4e7642 100755 --- a/cs130/cribSheet.md +++ b/cs130/cribSheet.md @@ -1,13 +1,10 @@ --- -layout: 130/CS130 +layout: CS130 math: true title: "Crib sheet for CS130 - Edmund Goodman" --- -* TOC -{:toc} - # Logic and predicates ## Material implication diff --git a/cs130/proofApproaches.md b/cs130/proofApproaches.md index df2083fc..e6161efd 100755 --- a/cs130/proofApproaches.md +++ b/cs130/proofApproaches.md @@ -1,13 +1,10 @@ --- -layout: 130/CS130 +layout: CS130 math: true title: "Proof approaches - Edmund Goodman" --- -* TOC -{:toc} - # Direct proof To prove a claim $$P \implies Q$$: diff --git a/cs132/index.md b/cs132/index.md index a1ae0421..2e11678a 100755 --- a/cs132/index.md +++ b/cs132/index.md @@ -28,7 +28,7 @@ These notes will likely take the same/similar form to the CS118 notes. The prima 1. [x] [Data representation](part1.html) 2. [x] [Digital logic](part2.html) -3. [ ] Assembler -4. [ ] Memory systems -5. [ ] I/O mechanisms -6. [ ] Processor architecture \ No newline at end of file +3. [x] [Assembler](part3.html) +4. [x] [Memory systems](part4.html) +5. [x] [I/O mechanisms](part5.html) +6. [ ] Processor architecture diff --git a/cs132/part1.md b/cs132/part1.md index ebdceed7..eaf2ee1d 100644 --- a/cs132/part1.md +++ b/cs132/part1.md @@ -1,11 +1,11 @@ --- -layout: notes +layout: CS132 title: Data Representation math: true +part: true +nex: part2 --- -* TOC -{:toc} ## Representation and number systems In terms of the exam, the most important concept is **value versus representation** of any number. In practice, this means you need to accept that you cannot always represent a value across different bases using the same number of symbols. diff --git a/cs132/part2.md b/cs132/part2.md index d15851a9..ee3a5ac8 100644 --- a/cs132/part2.md +++ b/cs132/part2.md @@ -1,8 +1,11 @@ --- -layout: 132/CS132 +layout: CS132 slides: true math: true title: Digital Logic +part: true +pre: part1 +nex: part3 --- # Logic Gates, Circuits and Truth tables @@ -384,4 +387,4 @@ Programmable logic devices allow much larger circuits to be created inside a sin ## PLA -Works by providing links/fuses that can be broken to produce a custom sum of products. As long as you are able to understand circuit diagrams you should be able to understand how you arrive at the sum of products for each output of the PLA. +Works by providing links/fuses that can be broken to produce a custom sum of products. As long as you are able to understand circuit diagrams you should be able to understand how you arrive at the sum of products for each output of the PLA. \ No newline at end of file diff --git a/cs132/part3.md b/cs132/part3.md index b99ce2b2..f8177e18 100644 --- a/cs132/part3.md +++ b/cs132/part3.md @@ -1,13 +1,12 @@ --- -layout: 132/CS132 -slides: true -layout: notes +layout: CS132 math: true title: Assembler +part: true +pre: part2 +nex: part4 --- -# Assembler - -## Microprocessor Fundamentals +# Microprocessor Fundamentals Before diving into assembler, we need to be familiar with the **key components of all CPUs**. No matter how complex a CPU is, they always have the two following components. @@ -19,9 +18,243 @@ Before diving into assembler, we need to be familiar with the **key components o > - Decode to form recognisable operations > - Execute to impact the current state -### Learn the fetch-decode-execute cycle. Think of it every time you look at a CPU, or a series of instructions. 
Think about which of the components (the CU or the ALU) are operating and when. +❕❗ **Learn the fetch-decode-execute cycle**. Think of it every time you look at a CPU, or a series of instructions. Think about which of the components (the CU or the ALU) are operating and when. + +The instruction cycle takes place over **several CPU clock cycles** – the same clock cycles we saw in **sequential logic circuits**. The FDE cycle relies on several CPU components interacting with one another. + +## FDE Components +There are several components that make up the FDE cycle: +- ALU +- CU +- **Program Counter** (PC): this tracks the **memory address** of the **next instruction** for execution +- **Instruction Register** (IR): contains the **most recent instruction** fetched +- **Memory Address Register** (MAR): contains the address of the _region_ of memory for read/write purposes +- **Memory Data Register** (MDR): contains **fetched data** from memory or **data ready to be written** to memory. The MDR is also sometimes referred to as the Memory Buffer Register (MBR). + +> Remember that the **Control Unit** is connected to all components + +A typical instruction cycle may look something like this: + +| Fetch | Decode | Execute | +| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | +| 1. Instruction Received from memory location in PC
2. Retrieved instruction stored in IR
3. PC incremented to point to next instruction in memory | 1. Opcode retrieved / instruction decoded
2. Read effective address to establish opcode type | 1. CU signals functional CPU components
2. Can result in changes to data registers, such as the PC etc.
3. PC incremented to point to next instruction in memory | + +# Registers + +Now that we have the FDE cycle established, we need **registers** to help store intermediate information- this can either be in the form of memory or system flags. The Motorola 68008 will be used to give context to each type of register: + +> You can think of a register as a parallel set of bits which can be toggled on or off. + +## Data registers +- These are useful for storing **frequently used values** or **intermediate results** of calculations. +- You typically **only need one** data register **on chip** – however, the advantage of having many registers is that **fewer references to external memory are needed**. + +> The 68008 has 32 bit data registers. This is a _long_ register; 16 bits form a _word_, and 8 bits form a _byte_. + +## Status registers +- These have various status bits that are set or reset by the **ALU**. +- They are a _set of flags_: + - Half are for the **system** (CU) + - The **conditional control register** is a **subset of flags** + +| ⬅ System byte ➡ | ⬅ User byte ➡ | +| :-------------: | :-------------------------------------------: | +| 8 bits | 8 bits, where a few bits will make up the CCR | + +> The CCR is made up of several bits representing statuses such as _extend, negative, zero, overflow, carry_. If you wanted to check the status of the computer in a program, you could use bitwise **AND** against a bitmask (the string of bits you want toggled) and seeing if the final result is the flag you wanted to see. + +## Address register +- These are used as **pointer registers** in the calculation of operand addresses. +- Operations on these addresses **do not alter the CCR**. +- Only the **ALU** has the capacity to incur changes in status (through operations on non-addresses). + +### Stack pointer +- This is an **address register** that points to the **next free location**; it can hold **subroutine return addresses**. + +> The 68008 has pointer registers `A0-A6` whilst `A7` is used as a system stack pointer. + +## Program counter +We are already familiar with what the PC does – it is a **32 bit** register on the 68008 that keeps track of the address at which the next instruction will be found. + +> If you were writing a software emulator, think of the memory as an array of strings (each string is an opcode). The PC would be an integer; your code would access `memory[PC]` to find out which opcode to pull from the memory and decode. Therefore, by incrementing the PC (an 8-bit, 16-bit, or 32-bit integer in your code) you can increment through the memory array. You can sometimes increment the PC by multiple amounts. +> Generally speaking, if you were to be writing an emulator for any CPU, you _could_ represent each register as an n-bit unsigned integer as you can toggle bits and perform bitwise operations, including bitshifts, on each integer variable. You would typically want to implement memory as a simple array of m-bit integers, where m is the word length of your CPU. + +# Register Transfer Language + +> RTL is used to describe the operations of the microprocessor as it is executing program instructions. +> It is also a way of making sure we access the correct parts of the microprocessor – **do not confuse it with assembler instructions**. 
+ +| Example RTL | Meaning | +|-------------|---------| +| `[MAR] ⬅ [PC]` | _Transfer_ the contents of the PC **to** the MAR | +| `[MS(12345)]` | The _contents_ of memory _location_ 12345 in the _main store_ | +| `[D1(0:7)] <- [D0(0:7)]` | Transfer the contents of the 1st 8bits of `D0` to the 1st 8bits of `D1` | + +### Example: Instruction fetching +Given a series of instructions in words, we can find a way to represent this in RTL. Consider the following example: + +| Plain words | RTL equivalent | +|-------------|----------------| +| Contents of PC transferred to MAR address buffers | `[MAR] ⬅ [PC]` | +| Increment the PC | `[PC] ⬅ [PC] + 1` | +| Load MBR from external memory, and set $$R / \bar W$$ to Read | `[MBR] ⬅ [MS([MAR])]`; $$R / \bar W$$ to Read | +| Transfer opcode to IR from MBR | `[IR] ⬅ [MBR]` | +| Decode the instruction | `CU ⬅ [IR(opcode)]` | + +If you wanted to add a constant byte to a register (take `D0` from the 68008), you would engage the ALU and then transfer this into a register: +``` +{ continue previous cycle } +[MBR] ⬅ [MS([MAR])] +ALU ⬅ [MBR] + D0 +[DO] ⬅ ALU +``` +As you can see, RTL describes how we can specifically set values in registers and interact with components in a standardised language. + +# Assembly Language + +*You should be able to explain the motivations, applications, and characteristics of high-level and low-level programming languages.* + +Code written in high-level programming languages typically go through a compiler, or for some languages like Python an [interpreter](https://www.computerscience.gcse.guru/theory/translators) (FYI only), and is eventually **translated** into machine code that your microprocessor understands. Low-level assembly code is **assembled** by an assembler into machine code. + +
+ Sometimes, the compilation process first compiles code into a lower-level assembly language and then the assembler assembles it into machine code, but in other cases high-level languages can be translated directly to machine code. + I previously had the misunderstanding that high-level languages are + **always** compiled to some kind of assembler language which is then + assembled to machine code, but this is not the case.
+ +The **motivation** for low-level languages is to give programmers more **control** of how the microprocessor executes a particular program, as it allows you to define the exact sequence of instructions that will be executed by the microprocessor. High-level programming languages don’t have the capability to provide such specific instructions. Sometimes, this means that the resultant machine code has **greater performance** than one that was compiled from a high-level language. + +| High-level Language | Machine Code | Assembler Language | +| :----------------------------------------------------------: | :-----------: | :----------------------------------------------------------: | +| Human readable.
Difficult to translate into performant machine code whilst retaining original intention. | Not readable. | More readable than machine code but more precise than high-level languages. | + +> Assembly language saves us from machine code by using **mnemonics**. We can provide **memory locations** and **constants**, as well as **symbolic names**. These features are not afforded to us by RTL! + +## Assembler Format + +Assembly language typically takes the following form: + +| | Label (Optional) | Opcode | Operand | Comment | +|:-----:|:------:|:-------:|:-------:|:-------:| +| **Example** | `START:` | `move.b` | `#5, D0` | `\|load D0 with 5` | +{: .centeredtable} + +## Assembly Language Conventions + +There are several conventions of Assembly language to keep in mind: + +| Number Symbol | Meaning | +|---------|-------------| +| `#` | Indicates a constant. A number without `#` is an address. By default, numbers are in base 10. | +| `$` | A **hex** value. E.g. `ORG $4B0 \| this program starts at hex 4B0` | +| `%` | A **binary** value. E.g. `add.b #%11, D0 \| add 3 to D0` |
+ +| Directives | Definition | Convention | Example | +|------------|------------|------------|---------| +| Label names | You can assign labels to represent bytes or instructions | Label or name followed by `:` | `ANS: DS.B 1` will leave 1 byte of memory empty and name it ANS | +| Defining storage (`DS`) | Instruct the assembler to reserve some memory | `DS.{data type} {amount}` | `DS.B 1` will leave 1 byte of memory free. See data types further on. | +| Origin (`ORG`) | Tells the assembler where in memory to start putting the instructions or data | `ORG` followed by value | `ORG $4B0` starts the program at hex `4B0` | + +If you want to string together an assembler instruction, you typically write them in the form +`operation.datatype` `source,` `destination` + +## Data types and assembler instructions + +Previously, we saw how the `DS` directive requires a data type and then an amount of data to set aside; Assembler language defines three types of data type: +- **8 bits / byte**: `.b` +- **2 bytes / word**: `.w` +- **4 bytes / long word**: `.l` + +> You can typically omit the data type and `.` if you are working with a **word**. + +# Instruction set aspects + +Generally speaking, there are two aspects to a CPU instruction set: +- **Instructions** which tell the processor which operations to perform + - Data movement: this is similar to what we have already seen with RTL + - Arithmetic instructions: keep in mind whether your CPU can operate on fractional numbers + - Logical instructions + - Branch instructions + - System control instructions +- **Addressing modes** tell the processor which ways it can access data or memory locations, or how they may be calculated by the CPU. + +> Addressing modes can provide data, specify where it is, and how to go find it. +> You may describe direct addresses, or relative addresses where you compare one address to another to find it. + +## Data Movement Instructions + +The `move` operations are similar to RTL, just pay attention to the data type. + +``` +move.b D0,D1 | [D1(0:7)] <- [D0(0:7)] +moveb D0,D1 | same +exg.b D4,D5 | exchange contents of two registers +swap D2 | swap lower and upper words of D2 +lea $F20,A3 | load effective address [A3] <- [$F20] +``` + +## Arithmetic Instructions + +Depending on your processor architecture, you may or may not have floating point support. + +``` +add.l Di,Dj | [Dj] <- [Di] + [Dj] +addx.w Di,Dj | also add in x bit from CCR +mulu.w Di,Dj | [Dj(0:31)] <- [Di(0:15)] * [Dj(0:15)] signed multiplication +``` + +You also have `sub` (subtract), `mulu` (unsigned mult), `divu` and `divs`. You don’t have to memorise or know these very well but the key takeaways are + +- The “variables” (around the comma `,`) are operated on sequentially (left to right). +- The result of the operation is stored in the second variable (after the comma `,`). +- You can add or subtract bits from the CCR +- Division and multiplication use the first half of the bits available (unless specified) because the resultant register has a fixed bit length (32 bits in the above example). + +## Logical instructions + +We can often use **bitmasks** to achieve our goals in conjunction with **bitwise operations**. + +``` +AND.B #%11110000, D3 | bitwise AND on 1111 0000 and first 8bits of D3 +``` + +Additional pointers: + +- **Shift operations** are fundamental; for example, you can multiply by 2 using left shift operations. +- Other operations such as rotations also exist. 
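+
+To make the bitmask and shift ideas concrete outside of assembler, here is a small C sketch. It is purely illustrative: the register value and the CCR-style flag positions are invented for the example rather than taken from the 68008. It masks off the lower nibble with a bitwise AND, tests a single status bit against a bitmask, and multiplies by two with a left shift.
+
+```
+#include <stdint.h>
+#include <stdio.h>
+
+/* Hypothetical CCR-style flag positions, invented for illustration only */
+#define FLAG_CARRY (1u << 0)
+#define FLAG_ZERO  (1u << 2)
+
+int main(void) {
+    uint8_t d3  = 0xB6;             /* 1011 0110, standing in for D3     */
+    uint8_t ccr = FLAG_ZERO;        /* pretend the last ALU op set Z     */
+
+    /* AND.B #%11110000,D3 : keep the upper nibble, clear the lower one  */
+    uint8_t masked = d3 & 0xF0;     /* 1011 0000                         */
+
+    /* Test a single status bit with a bitmask, as described for the CCR */
+    int zero_set = (ccr & FLAG_ZERO) != 0;
+
+    /* A left shift by one multiplies by two (while the result fits)     */
+    uint8_t doubled = (uint8_t)(0x15 << 1);   /* 0x15 = 21, 0x2A = 42    */
+
+    printf("masked=0x%02X zero_set=%d doubled=%d\n",
+           (unsigned)masked, zero_set, (int)doubled);
+    return 0;
+}
+```
+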
+ +## Branch instructions +These are crucial for **control flow statements**; we typically branch based on **conditions set in the CCR**. + +``` +LDA NumA | Read the value "NumA" +CMP NumB | Compare against "NumB" +BCC Loc | Go to label "Loc" if "NumA" < "NumB", or in RTL: [PC] <- Loc +``` + +[Example](https://www.c64-wiki.com/wiki/BCC) for illustration purposes (we don’t need to know what `LDA` or `CMP` is exactly just roughly understand the syntax). Branch instructions cause the processor to branch (jump) to a labelled address. + +- CCR flags are set by the previous instruction +- The current instruction can test the state of the CCR bits and branch if a certain **condition** is met. + +## Subroutines and Stacks +Subroutines (`JSR`; jump, `RTS`; return) let you use the **same code repeatedly** reducing program size and improving readability. It is similar to functions. + +Typically when a subroutine is called (with `JSR `), the current address in the PC is **pushed** to a stack and your stack pointer points to the newly pushed address (current address). The address of the subroutine is “loaded” into the PC and the instructions in the subroutine is executed. + +When `RTS` is called, the stack is **popped** and the **popped address** is put into the PC; the stack pointer points to the next address at the top of the stack. + +# Addressing modes +As mentioned earlier, there are several ways for the CPU to access memory; you should be familiar with the following, and they are found on many CPUs (not just the 68008): -The instruction cycle takes place over **several CPU clock cycles**- the same clock cycles we saw in **sequential logic circuits**. The FDE cycle relies on several CPU components interacting with one another. +| Address type | Definition | Example | +|--------------|------------|---------| +| Direct address | Explicitly specifying two registers in the same command | `move D3, D2` | +| Immediate address | The **operand** forms part of the instruction and **remains constant** | `move.b #$42, D5` | +| Absolute address | This **specifies the destination location explicitly** | `move.l D2, $7FFF0` which moves the long value held in D2 to address `$7FFF0` | +| Relative address | These all **relate to the program counter** to write **position independent code** | | -### FDE Components -There are several +> Indirect addressing is never on the exam; however, this is where we add offsets, increments, or indexed addressing to access memory or data. \ No newline at end of file diff --git a/cs132/part4.assets/image-20210506180830464.png b/cs132/part4.assets/image-20210506180830464.png new file mode 100644 index 00000000..a149be69 Binary files /dev/null and b/cs132/part4.assets/image-20210506180830464.png differ diff --git a/cs132/part4.md b/cs132/part4.md new file mode 100644 index 00000000..e0b84559 --- /dev/null +++ b/cs132/part4.md @@ -0,0 +1,213 @@ +--- +slides: true +layout: CS132 +math: true +title: Memory Systems +part: true +pre: part3 +nex: part5 +--- + +# Memory Systems +## Memory hierarchy +When deciding on a memory technology, you must consider the following factors: +- Frequency of access +- Access time +- Capacity required +- Financial cost + +> The **designer's dilemma** is the conflict that is caused by choosing between low cost, high capacity storage or high cost, low capacity storage. +> Ideally, we would want our storage access to be frequent, quick, and spatially efficient- the balance of these three leads to the cost of the storage. 
+ +Memory hierarchy diagram + +We know that roughly **90%** of memory accesses are within +-2KB of the previous program counter position. Therefore, we should only choose expensive memory **when we need it**, which is due to **spatial locality**. + +**Temporal locality** refers to the likelihood that a particular memory location will be referenced in the future. + +## Cache Memory +- Cache is **kept small to limit cost**; it is also **transparent to the programmer**. However, this does allow _some_ control over what is stored in it. +- A cache access is known as a **'cache hit'**. +- Cache speed is incredibly important- moving down the memory hierarchy will take orders of magnitude more time for similar memory hits. + +> **Moore's Law** is focused on the transistor count within integrated circuits. It states that this count doubles roughly every two years. +> Currently, single core frewuency is tailing off; this has lead the industry to focus on multicore performance instead. +> Comparitively, memory access speed is improving much more slowly; access time and capacity can become a huge bottleneck when it comes to creating performant systems. + +> Cache concepts are not included in these notes as they are not fully examined, and also do not feature in the revision videos. + +## Memory Cell Organisation +Now that we're familiar with different parts of the memory hierarchy, it's crucial that we understand how this memory is actually constructed (down to the metal almost). + +### Semiconductor Memory (main store) +Semiconductor memory is the most common form of main store memory, otherwise known as **RAM**. It can be broken up into several groups: +- **Static RAM** (SRAM) + - SRAM uses a **flip-flop** as storage element for each bit. +- **Dynamic RAM** (DRAM) + - For each bit, the **presence or absence of charge** in a capacitor to determine a `1` or `0`. + - The capacitor charge **leaks away over time**, which requires **periodic refreshing**. + - DRAM is typically cheaper than SRAM which is why we accommodate for the higher overhead. +> Refreshing DRAM incurs a **constant overhead**, which means that it **does not increase per bit**. + +Both **SRAM and DRAM are volatile** memory storage- therefore, power must continuously be applied. However, the similarities end there and it is crucial to recognise the differences between the two memory cells. + +> Always ask yourself about the cost of these memory technologies- it is the reason we have decided to use semiconductor memory as our main store. + +| SRAM cells | DRAM cells | +|------------|------------| +| Provides **better read/write times** | Generally simpler and more compact, which allows for **greater memory cell density** | +| **Cache memory**, both on and off chip, is implemented as SRAM | Cheaper to produce than equivalent SRAM memory, and hence is used for **main memory** | + +DRAM can be organised even further: +- Synchronous DRAM (SDRAM) +- Rambus DRAM (RDRAM) +- Double Data Rate Synchronous (DDR SDRAM) +- Cache DRAM (CDRAM) + +## Organising memory + +### Memory cells +Before we begin organising memory, it's useful to know what the individual memory cells will look like. Think of them as single boxes with the following properties: +- They only store two states (`1` or `0`). +- They are capable of being written to as well as read from. This is controlled by a $$R / \bar{W}$$ line which determines which direction the information will flow from. +- They are enabled when a single pin, such as a `SELECT` line, is powered. 
+ +Memory cell diagram + +> You can think of a memory cell as a means of storing a single bit. + +### Storing single words + +In order to store multiple bits together (i.e. words), we will simply store a series of memory cells next to each other. We will need some column selecting I/O to handle selecting the individual bits of the word correctly. + +Memory cell word diagram + +### Storing multiple words + +Now that we have organised individual words, we want to store multiple words in memory. We can use this grid arrangement to arrange the words in parallel as follows (imagine we wanted to store four of the 4-bit words shown above): + +Memory cell words diagram + +In our **address decoder**, we have $$ log_{2} (W) $$ many control pins, where $$ W $$ is the number of words we want to store in memory. (This is because each pin can be high or low, and hence refer to two distinct words). + +**We want to maintain a square grid of cells.** We could simply have a 16-bit word, which we partition into four individual words (it is possible to put smaller words into the registers of larger ones). However, this would require 16 data lines on the column selection IO, with each bit requiring power; this would be rather lopsided and would result in a column selector doing all the work. Maintaining a square grid means that we can balance the number of required pins across two different pieces of IO, each with their own power requirements. + +> We are trying to avoid long, narrow arrays when we design our memory cell arrays. We want to **maximise space for memory cells** and minimise space taken up by IO. + +# Detecting and Correcting Errors + +> Although this topic is within the memory systems lecture, it is fundamental to error detection on the whole and hence has its own section here. + +Broadly speaking, there are two types of errors: +- Errors that occur **within a system**, e.g. in a memory system. +- Errors that occur in the **communication between systems**, e.g., in the transmission of messages or data between systems. This is what we will focus on. + +## Noise + +- We typically **send information through channels**- when these channels become affected by **unwanted information**, they become **noisy**. +- Noise will arise from the **physical properties** of devices: + - Thermal noise + - Noise of electronic components + - Noise of transmission circuits +- **Magnetic media** will also have a "classic form of noise" due to the "random alignment of magnetic fields". + +> Noise is **always present**. If it doesn't come from the components themselves, it'll come from external sources such as radiation. Noise is hence one of the **limiting factors** in computer systems. +> In magnetic stores, when we have **decreased area** to store a bit, **noise gets worse** which increases the likelihood of errors. + +### Digital logic devices + +> We choose binary systems for our number systems as it provides us a **high degree of noise immunity**. +> We also need to consider the **tolerances of the components we use** + +#### Illustrating noise immunity, a trademarked Akram Analogy™ + +_If you are comfortable with the idea of noise immunity and transistor-transistor logic voltage levels, you probably won't need to read this._ + +To illustrate the first point, consider the following thought experiment: +- You and your friend have found a massive tunnel (assuming CS students step outdoors). 
**The tunnel has water dripping and some other ambiguous sounds.** +- You both stand at either end of the tunnel, and you realise now you want to say something to your friend. You have two choices: + - You can choose to simply clap your hands to get their attention (a binary communication system), OR + - You can choose to say a magic password that only they will respond to (a base-26 communication system). + +Given the ambiguous sounds in the tunnel, which do you think your friend will be able to distinguish better? Would they be able to distinguish a clap above a specific volume? Or would they be able to distinguish the spoken magic password? How do you know when a sound is finally loud enough to constitute you communicating with one another? + +This idea of a small window where we do not consider a signal high or low is widest when we use a binary system- if we had any more possible values, we would need to find even more ranges which we consider 'nothing' (_**i.e. neither 0 nor anything else)**_. + +> Using binary means that we only focus on two logical values. + +In the image below, you can see the illustrated example for the above analogy, with annotated TTL voltage levels for context. + +Memory cell word diagram + +Memory cell word diagram + +There is a point at which **if there is too much noise**, i.e. a train suddenly passes through the tunnel, your clap will never be heard and is permanently lost- this is known as a **loss/ collapse of immunity**. + +## Detecting single errors +If we _assume_ that errors occur **at random** due to noise, one could naively ask you to clap three times and hope that your friend hears majority of them- i.e., you could send the message several times and take a vote. However, this is a very expensive affair (you would get tired quickly). + +We can make the further assumption that **if the probability of one error is low, the probability of two errors close together is even lower**. Using this knowledge, we can add a **parity bit** to the message which can **summarise the property of the message**. We can check that this property is intact to see whether the message has been altered; using a parity bit is typically **cheaper and adequate** in many situations. + +### Parity systems + +> There are many different types of parity systems, but the two main ones you should be focused on are the **even parity** and the **odd parity** system. +> Each system will add an extra bit to the message which makes the **number of logical 1's even or odd** depending on the system chosen. + +| Non-parity message (7 bits) | Even parity bit added | Odd parity bit added | +|-----------------------------|-----------------------|----------------------| +| `100 0001` | `0100 0001` (two `1`'s) | `1100 0001` (three `1`s) | + +It is possible to calculate the parity bit using hardware or software. + +#### Finite automaton to calculate parity + +The lecture slides contain a two-state finite automaton- this diagram shows how, for a message `110` travelling on an **even parity system**, we can use the automaton to reach a parity bit of `0`, so the message to be sent is `0110`. + +Memory cell word diagram + +#### Hardware to calculate parity + +You can calculate the parity bit for a message by **XORing each bit with one another**. You can achieve this by connecting each pair of bits to an XOR gate; for an odd number of input bits, add a `0` for an **even** parity system and a `1` for an **odd** parity system. 
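+
+The same calculation is easy to do in software. The C sketch below is only illustrative: it XOR-folds the seven bits of a message to produce an even parity bit (flipping the result gives the odd-parity equivalent), and the `even_parity` helper name and the `0x41` test value are made up for the example, with `0x41` matching the `100 0001` row in the table above.
+
+```
+#include <stdint.h>
+#include <stdio.h>
+
+/* Even parity bit of a 7-bit message: XOR of all the bits together,
+ * exactly like chaining the bits through XOR gates (or walking the
+ * two-state automaton, flipping state on every 1).                  */
+static unsigned even_parity(uint8_t msg) {
+    unsigned p = 0;
+    for (int i = 0; i < 7; i++) {
+        p ^= (msg >> i) & 1u;
+    }
+    return p;   /* 0 if the count of 1s is already even */
+}
+
+int main(void) {
+    uint8_t a = 0x41;   /* 100 0001, which contains two 1s */
+    printf("even parity bit: %u\n", even_parity(a));         /* 0 */
+    printf("odd parity bit:  %u\n", even_parity(a) ^ 1u);    /* 1 */
+
+    /* Prepend the even parity bit to get 0100 0001, as in the table */
+    uint8_t sent = (uint8_t)((even_parity(a) << 7) | a);
+    printf("sent = 0x%02X\n", (unsigned)sent);               /* 0x41 */
+    return 0;
+}
+```
+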
+ +## Detecting multiple errors + +> In the real world, it is more likely that **errors will appear in bursts**. + +Burst errors can be caused by number of reasons, including but not limited to network or communication dropouts for a few milliseconds. +In this scenario, there may be errors in multiple bits and single-bit parity will still hold. Therefore, we must move to **checksums** to check entire columns. + +### Bit-column parity +One way in which we can identify errors in multiple columns (i.e. multiple bits) is to use bit-column parity. + +Take the message, `Message`, which is made up of 7 7-bit ASCII characters: + +| Character | 7 Bits | +|-----------|--------| +| `M` | `100 1101` | +| `e` | `110 0101` | +| `s` | `111 0011` | +| `s` | `111 0011` | +| `a` | `111 0001` | +| `g` | `111 0111` | +| `e` | `111 0101` | + +
+By arranging each column into its own message, we can then calculate a parity bit for each message: + +| Column number | 7-bit column | Even parity bit | +|---------------|--------------|-----------------| +| 1 | `111 1111` | `1` | +| 2 | `011 1111` | `0` | +| 3 | `001 1000` | `0` | +| 4 | `100 0000` | `1` | +| 5 | `110 0011` | `0` | +| 6 | `001 1010` | `1` | +| 7 | `111 1111` | `1` | + +We can then take _this_ column and turn it into a 7-bit message: `1001011` spells out ASCII `K`. Now, we can add `K` to the end of our original message, and send the final message `MessageK`. + +> This system will detect all burst errors of less than 14 bits; it will fail if an even number of errors occur in a bit-column (i.e., a message equal to 8 characters). + +### Error Correcting Codes: row and column parity + +The above example only detects errors in columns- but it doesn't stop us from using row correction at the exact same time. If we have both row parity and column parity, then we begin by checking if each column has correct parity. If we find a column with incorrect parity, we immediately begin going through the rows, and checking the parity of each row. If we find a mistake in a row as well, we simply need to invert the bit found in the column with an error. This ECC enables us to detect multiple errors and fix single errors. diff --git a/cs132/part4res/4-1.png b/cs132/part4res/4-1.png new file mode 100644 index 00000000..09276c86 Binary files /dev/null and b/cs132/part4res/4-1.png differ diff --git a/cs132/part4res/4-2.png b/cs132/part4res/4-2.png new file mode 100644 index 00000000..1bf16a8f Binary files /dev/null and b/cs132/part4res/4-2.png differ diff --git a/cs132/part4res/4-3.png b/cs132/part4res/4-3.png new file mode 100644 index 00000000..01a18f7b Binary files /dev/null and b/cs132/part4res/4-3.png differ diff --git a/cs132/part4res/4-4.png b/cs132/part4res/4-4.png new file mode 100644 index 00000000..34fa912c Binary files /dev/null and b/cs132/part4res/4-4.png differ diff --git a/cs132/part4res/4-5.png b/cs132/part4res/4-5.png new file mode 100644 index 00000000..3d4f503f Binary files /dev/null and b/cs132/part4res/4-5.png differ diff --git a/cs132/part4res/4-6.png b/cs132/part4res/4-6.png new file mode 100644 index 00000000..5c45081d Binary files /dev/null and b/cs132/part4res/4-6.png differ diff --git a/cs132/part4res/4-7.png b/cs132/part4res/4-7.png new file mode 100644 index 00000000..929ca842 Binary files /dev/null and b/cs132/part4res/4-7.png differ diff --git a/cs132/part5.md b/cs132/part5.md new file mode 100644 index 00000000..407d0005 --- /dev/null +++ b/cs132/part5.md @@ -0,0 +1,110 @@ +--- +title: I/O Mechanisms +layout: CS132 +part: true +pre: part4 +nex: part6 +--- + +

+ There is no single I/O mechanism that is “better” than the others – it is important to understand the pros and cons of each mechanism and the situations where each should be used. +

+ +# Memory mapped I/O + +- Same address bus is used to address both memory and I/O devices. +- Memory addresses are associated with particular I/O devices – when we want to send data to an I/O device, we send it to that memory address; when we want to receive data, we just read from that memory address. +- Memory and registers on I/O devices are mapped to these address values. An I/O device can then operate as designed on the data in these addresses. + +*This means that all addressing modes supported by a CPU are available to I/O.* + +> **Advantages.** No need for dedicated instructions, or for additional hardware. Addressing modes supported by the CPU are available to I/O. +> +> **Disadvantages.** We are giving up portions of our memory to I/O devices. This is less of a concern for modern 64-bit processors with more address spaces, but is still relevant when sometimes you have no choice but to use a processor with constrained memory addresses like 16-bit legacy or embedded systems. + +# Polled I/O + +> **Synchronising I/O devices** with our CPUs is one of the biggest challenges associated with I/O systems, as most of our I/O devices operate at much slower speeds than the CPU. + +Our CPUs **operate much faster** than IO devices, so while IO devices are doing their thing, the CPU can do other things. + +**Busy-wait polling**, where your CPU is constantly reading and checking the status of a particular IO device (essentially waiting for it to be ready), is **very wasteful of CPU time and power**. Only time you not want to do this, is for **devoted systems** that want to keep checking for the output because it is **important** (e.g. temperature sensor in nuclear reactor). + +**Polling, interleaved with another task** is a bit better because you can do other tasks while waiting for your I/O devices to be ready. But even then, this can lead to significantly delayed responses to that device, because your **estimate for the the time-interval** between status checks can still be off and there are cases where your CPU is dominated by the so-called side tasks. + +> **Advantages.** Simple software and hardware involved; usually some kind of `loop` paired with some conditional checks and hardware support for the notion of “ready” is all that is required. +> +> **Disadvantages.** Busy-wait polling – waste of CPU time and power. If you have a power constrained device, this may not be good. Interleaving can still lead to significantly delayed responses to a particular I/O device – not a problem in most cases but is a serious issue if you’re working in a hard real-time context. + +# Handshaking + +**Handshaking** is another way of solving synchronisation problems with I/O devices. There are 3 kinds of handshaking: + +- **Unsynchronised** – where you just provide data to an I/O device for it to process. +- **Open-ended** – provide some data and assert its validity, after which the I/O device will handle the data. +- **Closed-loop** (the only true two-way communication) – data provided, asserted validity of data, recipient (I/O device) readiness. + +**Closed-loop** handshaking allows both parties to know the period/time-interval where effective data transfer can occur. This is when both data is valid and I/O device is ready to receive. This way the CPU can more accurately predict when data transfer can occur and when it should do other things. + +Handshaking can be implemented with software or specialised hardware, which often requires fewer CPU instructions. 
Hardware solutions are usually used in embedded systems when software is not available for you to use. + +# Interrupts + +Another way to target synchronisation problems. CPU normally executes instructions sequentially, unless a jump or branch is made – an interrupt input can force a CPU to jump to a service routine. The key difference between interrupts and handshaking or polling is that it is **asynchronous.** + +**Interrupt requests** may be ignored depending on the current task that the CPU is working on. The CPU compares the “priority” of the tasks and decides which tasks supersedes the other. **Non-maskable Interrupts** cannot be ignored and the CPU will have to service the interrupt. + +When an interrupt is serviced, typically the CPU will finish the current instruction it is working on and, save the state of the working registers and the program counter (usually saving this state on a **stack**). It will then process the interrupt service routine. Once complete, it will remove the program counter from the stack and start processing instructions again from where it left off. + +> Pushing and popping the PC and status registers onto the stack before and after servicing an interrupt is known as a **context switch** because we are changing the state to execute a different set of instructions. + +Maskable interrupts can be interrupted as well, provided that the **new** interrupt is of a higher priority than the current interrupt. This is why popping the PC and registers onto a stack is useful so we can keep track and sequentially process different set of instructions based on priority. + +## Interrupts for IO examples + +Some IO devices can generate interrupts themselves. + +A hard drive can generate an interrupt when data, requested some time earlier, is ready to be read. + +A timer can generate an interrupt every 100ms and the service routine can then read a sensor input. + +A printer can generate an interrupt when it is ready to receive the next character to print. + +> **Advantages.** The asynchronous nature of interrupts allow fast responses and no waste of CPU time/battery power – especially when the IO devices are asynchronous themselves. +> +> **Disadvantages.** But, all data transfers still controlled by CPU (DMA addresses this). Interrupts also make hardware and software more complex. + +# Direct Memory Access (DMA) + +Interrupts rely on the microprocessor (CPU) to do everything and this makes it the bottleneck for I/O if there are **large amounts of data** that must be transferred at high speed. + +DMA fixes this by giving control of the system buses from the CPU to the DMA Controller (DMAC). + +- The DMAC is a dedicated device that controls the three system buses during the data transfer. +- The DMAC is **optimised solely for data transfer**. + +> **Advantages.** When dealing with large amounts of data, DMA-based I/O can be up to 10 times faster than CPU-driven I/O. CPU is able to process other instructions that do not require the system buses while the DMAC oversees data transfer. +> +> **Disadvantages.** Additional hardware cost. + +## DMA Modes of Operation + +> **Cycle Stealing.** DMAC uses the system buses when they are not being used by the CPU – usually by using available memory access cycles not used by the CPU. This is less effective and less common then the next mode of operation. 
+> +> **Burst Mode.** DMAC acquires system buses for the transfer of large amounts of data at high speed, preventing the CPU from using the system buses for a fixed time OR… +> +> - until the transfer is complete +> - the CPU receives an interrupt from a device of greater priority. + +These are the events that usually take place before the CPU surrenders control of the system buses (a hypothetical sketch of step 3 follows the list): + +1. DMA transfer requested by I/O +2. DMAC passes request to CPU +3. CPU initialises DMAC + - Specifies if it is an **Input** or **Output** operation. + - Sets the **start address** for the data transfer to the DMAC Address Register. + - Sets the **number of words** to transfer to the DMAC Count Register. + - CPU enables DMAC to initiate the transfer. +4. DMAC requests use of system buses **depending** on its mode of operation. +5. CPU responds with DMA Ack when it's ready to surrender buses. +
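+
+Tying this back to the memory-mapped I/O section at the start of these notes, step 3 might look something like the C sketch below. Every register address, name, and bit position here is hypothetical rather than taken from a real DMAC; the point is just that the CPU writes a direction, a start address, and a word count into the controller's registers and then enables the transfer.
+
+```
+#include <stdint.h>
+
+/* Hypothetical memory-mapped DMAC registers. The addresses and bit layout
+ * are invented for illustration and do not belong to any real controller. */
+#define DMAC_ADDR   (*(volatile uint32_t *)0x00F00000u)  /* start address  */
+#define DMAC_COUNT  (*(volatile uint32_t *)0x00F00004u)  /* words to move  */
+#define DMAC_CTRL   (*(volatile uint32_t *)0x00F00008u)  /* control/status */
+
+#define DMAC_CTRL_READ   (1u << 0)  /* 1 = device-to-memory (input)        */
+#define DMAC_CTRL_ENABLE (1u << 1)  /* start the transfer                  */
+#define DMAC_CTRL_DONE   (1u << 2)  /* set by the DMAC when it finishes    */
+
+/* Step 3 above: the CPU programs the DMAC, then gets on with other work
+ * (or waits for a completion interrupt) while the DMAC drives the buses. */
+static void start_dma_input(uint32_t dest_addr, uint32_t word_count) {
+    DMAC_ADDR  = dest_addr;                          /* start address      */
+    DMAC_COUNT = word_count;                         /* number of words    */
+    DMAC_CTRL  = DMAC_CTRL_READ | DMAC_CTRL_ENABLE;  /* direction, then go */
+}
+```
+
+A real controller would also provide some way of signalling completion, whether that is a status bit like the hypothetical `DMAC_CTRL_DONE` above or an interrupt, which is one reason DMA and interrupts are so often used together.
+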