Commit c709b2b

mo khan <mo.khan@gmail.com>
2020-06-28 22:03:56
Fix makefile
1 parent b666dec
doc/assignments/assignment_1-24oct2016.pdf
Binary file
doc/assignments/assignment_2.pdf
Binary file
doc/assignments/assignment_3.pdf
Binary file
doc/assignments/comp272r7_sample_exam.pdf
Binary file
doc/unit/00/README.md
@@ -0,0 +1,14 @@
+# Unit 0
+
+[link][unit_00]
+
+| Criteria | Percentage | Excellent |
+| --- | --- | --- |
+| Functionality | 20% | Completed between 96% and 100% of the requirements. Delivered on time, and in correct format (zipped files, Moodle submission, etc.). |
+| Reflection | 20% | Excellent reflection on coding habits. |
+| Coding Standards | 15% | Includes name, date, and assignment title. Excellent use of white space. Consistently organized work. Excellent use of variables (no global variables, unambiguous naming). |
+| Documentation | 15% | Clearly and effectively documented including descriptions of all key variables. Specific purpose is noted for each function, control structure, input requirements, and output. |
+| Runtime + test cases | 15% | Executes without errors excellent user prompts, good use of symbols, spacing in output. Thorough and organized testing has been completed, and output from test cases is included. |
+| Efficiency | 15% | Solution is efficient, easy to understand and maintain. |
+
+[unit_00]: https://scis.lms.athabascau.ca/file.php/438/studyguide/unit_00.htm
doc/unit/01/dyck_word.rb
@@ -0,0 +1,36 @@
+require 'bundler/inline'
+
+gemfile do
+  source 'https://rubygems.org'
+
+  gem 'minitest'
+end
+
+require 'minitest/autorun'
+
+=begin
+A Dyck word is a sequence of +1’s and -1’s with the property that the sum of any prefix
+of the sequence is never negative.
+For example, +1,−1,+1,−1 is a Dyck word, but +1,−1,−1,+1 is not a Dyck word since the prefix +1 − 1 − 1 < 0.
+
+Describe any relationship between Dyck words and Stack push(x) and pop() operations.
+=end
+
+class Example < Minitest::Test
+  def dyck_word?(sequence)
+    sum = 0
+    sequence.each do |item|
+      sum += item
+      return false if sum.negative?
+    end
+    true
+  end
+
+  def test_valid_word
+    assert dyck_word?([1, -1, 1, -1])
+  end
+
+  def test_invalid_word
+    refute dyck_word?([1, -1, -1, 1])
+  end
+end
doc/unit/01/eulers-constant.png
Binary file
doc/unit/01/matched_string.rb
@@ -0,0 +1,51 @@
+require 'bundler/inline'
+
+gemfile do
+  source 'https://rubygems.org'
+
+  gem 'minitest'
+end
+
+require 'minitest/autorun'
+
+=begin
+A matched string is a sequence of {, }, (, ), [, and ] characters that are properly matched.
+For example, “{{()[]}}” is a matched string, but “{{()]}” is not, since the second { is matched with a ].
+Show how to use a stack so that, given a string of length n, you can determine if it is a matched string in O(n) time.
+=end
+
+class Example < Minitest::Test
+  def matches?(open, close)
+    case open
+    when '(' then close == ')'
+    when '{' then close == '}'
+    when '[' then close == ']'
+    else false # closing bracket with no matching opening bracket
+    end
+  end
+
+  def matched_string?(string)
+    stack = []
+    string.each_char do |char|
+      case char
+      when '{', '[', '('
+        stack.push(char)
+      else
+        return false unless matches?(stack.pop, char)
+      end
+    end
+    stack.empty?
+  end
+
+  def test_valid
+    assert matched_string?("{{()[]}}")
+  end
+
+  def test_invalid
+    refute matched_string?("{{()]}")
+  end
+end
doc/unit/01/README.md
@@ -0,0 +1,325 @@
+Read:
+
+* https://en.wikipedia.org/wiki/List_of_algorithms
+* https://www.topcoder.com/community/competitive-programming/tutorials/the-importance-of-algorithms/
+
+Watch:
+
+* https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/lecture-videos/lecture-1-algorithmic-thinking-peak-finding/
+
+## The need for efficiency
+
+* Number of operations
+* Processor speeds
+* Bigger data sets
+
+## Interfaces
+
+An `interface`, sometimes also called an `abstract data type`, defines the
+set of operations supported by a data structure and the semantics,
+or meaning, of those operations.
+
+### Queue
+
+* `add(x)`: add a value to the queue (a.k.a enqueue, push)
+* `remove()`: remove the next value from the queue and return it. (a.k.a. dequeue, shift)
+
+FIFO (first-in-first-out) removes items in the same order they were added.
+
+A priority queue always removes the smallest element from the queue, breaking ties arbitrarily.
+`remove()` is sometimes called `deleteMin()`.
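+
+A minimal (and deliberately inefficient) Ruby sketch of these semantics — the class name and the linear-scan `remove` are illustrative, not from the text:
+
+```ruby
+# Naive priority queue: remove() always returns the smallest element.
+class NaivePriorityQueue
+  def initialize
+    @items = []
+  end
+
+  def add(x)
+    @items << x
+  end
+
+  # a.k.a. deleteMin(); assumes the queue is not empty
+  def remove
+    @items.delete_at(@items.index(@items.min))
+  end
+end
+
+pq = NaivePriorityQueue.new
+[5, 1, 3].each { |x| pq.add(x) }
+pq.remove # => 1
+```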
+
+### Stack
+
+LIFO (last-in-first-out): the most recently added element is the next one removed.
+
+* `add(x)`: add a value to the stack (a.k.a. push)
+* `remove()`: remove and return the value at the top of the stack (a.k.a. pop)
+
+### Deque
+
+A deque is a generalization of both the FIFO Queue and the LIFO Stack.
+It represents a sequence of elements with a front and a back, so elements can be added or removed at either end.
+
+### List
+
+Represents a sequence, x0,...,xn-1, of values.
+
+* `size()`: return the length of the list.
+* `get(i)`: return the value xi
+* `set(i, x)`: set the value of xi to equal to x
+* `add(i, x)`: add x at position i, displacing xi,...,xn-1;
+* `remove(i)`: remove the value xi displacing xi+1,...,xn-1;
+
+The Deque operations can be implemented in terms of the List interface (a small Ruby sketch follows the mapping below):
+
+* `addFirst(x)` -> `add(0, x)`
+* `removeFirst()` -> `remove(0)`
+* `addLast(x)` -> `add(size(), x)`
+* `removeLast()` -> `remove(size() - 1)`
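+
+A minimal Ruby sketch of that mapping (the module and method names are illustrative): any class that implements the List operations `size`, `add(i, x)`, and `remove(i)` gets the Deque operations for free.
+
+```ruby
+# Mixing this module into a class that implements the List interface yields a Deque.
+module DequeOps
+  def add_first(x)
+    add(0, x)
+  end
+
+  def remove_first
+    remove(0)
+  end
+
+  def add_last(x)
+    add(size, x)
+  end
+
+  def remove_last
+    remove(size - 1)
+  end
+end
+```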
+
+### USet
+
+The `USet` interface represents an unordered set of unique elements, which
+mimics a mathematical set. A `USet` contains `n` distinct elements; no
+element appears more than once; the elements are in no specific order. A
+`USet` supports the following operations:
+
+* `size()`: return the number, `n`, of elements in the set.
+* `add(x)`: add the element `x` to the set if not already present;
+* `remove(x)`: remove `x` from the set;
+* `find(x)`: find `x` in the set if it exists; return it, or null if no such element exists
+
+### SSet
+
+The `SSet` interface represents a sorted set of elements. An `SSet` stores elements
+from some total order so that any two elements x and y can be compared. In code
+examples, this will be done with a method called `compare(x, y)` in which:
+
+* < 0 if x < y
+* > 0 if x > y
+* = 0 if x == y
+
+An `SSet` supports the `size()` and `add()` and `remove()` methods with
+exactly the same semantics as in the `USet` interface. The difference
+between a `USet` and an `SSet` is in the `find(x)` method:
+
+> successor search: locate x in the sorted set;
+> find the smallest element y in the set such that y >= x;
+> return y, or null if no such element exists.
+
+
+The extra functionality provided by an SSet usually comes at a price that
+includes both a larger running time and a higher implementation complexity.
+An SSet implementation typically provides `find(x)` in logarithmic time,
+whereas a USet implementation can provide `find(x)` in constant (expected) time.
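+
+As a small illustration of successor search, Ruby's `Array#bsearch` (find-minimum mode) performs exactly this `find(x)` on a sorted array in O(log n) comparisons:
+
+```ruby
+sorted = [2, 5, 9, 14]
+sorted.bsearch { |y| y >= 6 }  # => 9   (smallest element y with y >= 6)
+sorted.bsearch { |y| y >= 20 } # => nil (no such element)
+```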
+
+
+## Math Review
+
+### Exponentials and Logarithms
+
+The expression b^x denotes the number `b` raised to the power of `x`.
+
+* when x is negative, b^x = 1/(b^-x)
+* when x is 0, b^x = 1
+
+
+```text
+b^x = b * b * ... * b
+      |_____________|
+             |
+          x times
+```
+
+```ruby
+b ** x == x.times.inject(1) { |m, _| m * b }
+```
+
+```irb
+irb(main):001:0> 2 ** 10
+=> 1024
+irb(main):002:0> 10.times.inject(1) { |m, _| m * 2 }
+=> 1024
+```
+
+log_b(k) denotes the base-b logarithm of k, i.e. the exponent x such that b^x = k.
+
+```text
+log_b(k) = x, where b^x = k
+```
+
+```ruby
+irb(main):016:0> 2 ** 10
+=> 1024
+irb(main):017:0> Math.log2(1024)
+=> 10.0
+```
+
+An informal way to think about logarithms is to think of log_b(k) as the number
+of times we have to divide k by b before the result is less than or equal to 1.
+
+For example, when one does binary search, each comparison reduces the number of
+possible answers by a factor of 2. This is repeated until there is at most one
+possible answer. Therefore the number of comparisons done by binary search when there
+are initially at most n + 1 possible answers is at most log2(n+1).
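+
+A rough Ruby sketch of that argument (not from the text): each iteration halves the range of possible answers, so the number of probes is at most ceil(log2(n + 1)).
+
+```ruby
+def binary_search(a, target)
+  lo, hi, probes = 0, a.length - 1, 0
+  while lo <= hi
+    probes += 1
+    mid = (lo + hi) / 2
+    return [mid, probes] if a[mid] == target
+    a[mid] < target ? lo = mid + 1 : hi = mid - 1
+  end
+  [nil, probes]
+end
+
+binary_search((1..1000).to_a, 1) # => [0, 9]; 9 <= Math.log2(1001).ceil (= 10)
+```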
+
+Another logarithm that comes up several times in this book is the natural logarithm.
+Here we use the notation ln k to denote log e(k), where e -- Euler's constant -- is given by:
+
+![eulers constant](./eulers-constant.png)
+
+The natural logarithm comes up frequently because it is the value of a particularly common integral.
+
+### Factorials
+
+> n!: pronounced n factorial. `n! = 1 * 2 * 3 ... n`
+
+* 0! is defined as 1
+
+```ruby
+irb(main):001:0> class Integer
+irb(main):002:1>   def !
+irb(main):003:2>     (1..self).inject(:*)
+irb(main):004:2>   end
+irb(main):005:1> end
+=> :!
+irb(main):006:0> !2
+=> 2
+irb(main):007:0> !3
+=> 6
+irb(main):008:0> 2.!
+=> 2
+irb(main):009:0> 3.!
+=> 6
+```
+
+#### binomial coefficient
+
+* `(n choose k)`, sometimes written `C(n, k)`, is the binomial coefficient, pronounced "n choose k".
+* It counts the number of subsets of an `n`-element set that have size `k`,
+  i.e. the number of ways of choosing `k` distinct integers from the set {1,...,n}.
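+
+A quick Ruby sketch using the factorial formula C(n, k) = n! / (k! * (n - k)!) — the method name is illustrative:
+
+```ruby
+def choose(n, k)
+  factorial = ->(m) { (1..m).inject(1, :*) }
+  factorial.(n) / (factorial.(k) * factorial.(n - k))
+end
+
+choose(4, 2) # => 6 -- the six 2-element subsets of {1, 2, 3, 4}
+```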
+
+
+### Asymptotic Notation
+
+When analyzing data structures we want to talk about the running times of various operations.
+The exact running times will, of course, vary from computer to computer and even from run to run on an individual computer.
+When we talk about the running time of an operation we are referring to the number of computer instructions
+performed during the operation. Instead of analyzing running times exactly, we will use the so-called
+big-Oh notation: for a function `f(n)`, `O(f(n))` denotes a set of functions.
+
+
+E.g.
+
+```c
+void snippet() {
+  for (int i = 0; i < n; i++)
+    a[i] = i;
+}
+```
+
+* 1 assignment (int i = 0)
+* n + 1 comparisons (i < n)
+* n increments (i++)
+* n array offset calculations (a[i])
+* n indirect assignments (a[i] = i)
+
+We could write this running time as:
+
+```text
+T(n) = a + b*(n + 1) + c*n + d*n + e*n,
+
+where a, b, c, d, and e are constants that depend on the machine running the code and
+represent the time to perform assignments, comparisons, increment operations, array offset calculations,
+and indirect assignments, respectively.
+```
+
+With big-Oh notation the running time can be simplified to:
+
+```text
+T(n) = O(n)
+```
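+
+A rough empirical check of this (not from the text): doubling `n` roughly doubles the measured time of the loop, which is what T(n) = O(n) predicts.
+
+```ruby
+require 'benchmark'
+
+# Fill an array of n elements, as in the C snippet above, and time it.
+[1_000_000, 2_000_000].each do |n|
+  a = Array.new(n)
+  puts Benchmark.realtime { n.times { |i| a[i] = i } }
+end
+```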
+
+## The Model of Computation
+
+To analyze the theoretical running times of operations on data structures we use a
+mathematical model of computation. We use the w-bit word-RAM model.
+
+RAM stands for Random Access Machine. In this model we have access to a
+random access memory consisting of cells, each of which stores a w-bit `word`.
+This implies that a memory cell can represent, for example, any integer in the set {0,...,2^w - 1}.
+
+In the word-RAM model, basic operations on words take constant time. This includes arithmetic operations
+`(+, -, *, /, %)`, comparisons `(<, >, =, <=, >=)` and bitwise boolean operations (bitwise AND, OR, and
+exclusive-OR).
+
+Any cell can be read or written in constant time. A computer's memory is managed by a memory management
+system from which we can allocate or deallocate a block of memory of any size we would like.
+Allocating a block of memory of size `k` takes `O(k)` time and returns a reference (a pointer) to
+the newly-allocated memory block.
+
+The word-size `w` is a very important parameter of this model. The only assumption we will
+make about `w` is the lower bound `w >= log(n)`, where n is the number of elements stored in
+any of our data structures.
+
+Space is measured in words, so that when we talk about the amount of space used by a data structure, we
+are referring to the number of words of memory used by the structure. All of our data structures
+store values of generic type T, and we assume an element of type T occupies one word of memory.
+
+The w-bit word-RAM model is a fairly close match for the 32-bit Java Virtual Machine when w = 32.
+The data structures presented in this book don't use any special tricks that are not implementable on the
+JVM and most other architectures.
+
+
+We evaluate the performance of a data structure along three dimensions:
+
+1. Correctness: The data structure should correctly implement its interface.
+2. Time complexity: The running times of operations on the data structure should be as small as possible.
+3. Space complexity: The data structure should use as little memory as possible.
+
+Running time guarantees:
+
+1. Worst-case running times: These are the strongest kind of running time guarantees. If the worst case is `f(n)` then one operation will never take more than `f(n)` time.
+2. Amortized running times: If we say that the amortized running time of an operation in a data structure is `f(n)`, then a sequence of `m` operations takes at most `m * f(n)` time in total; individual operations may be slower, but the average cost over the sequence is `f(n)`.
+3. Expected running times: If we say that the expected running time of an operation on a data structure is `f(n)`, this means that the actual running time is a random variable and the expected value of this random variable is at most `f(n)`.
+
+Worst-case versus amortized cost:
+
+A home costs $120,000.00.
+Financing it with a 10-year mortgage means a monthly payment of $1,200.00, so the worst-case cost in any single month is $1,200.00.
+
+Buying the house outright costs $120,000.00 up front. Amortized over the same 10 years (120 months), that works out to $1,000.00/month, even though the entire cost falls in one month.
+
+Worst-case versus expected cost:
+
+Consider fire insurance on a $120,000.00 home.
+The insurance company charges $15.00/month, while the expected cost of fire damage works out to $10.00/month.
+Do we pay the $15.00/month premium, or self-insure and expect to pay $10.00/month on average?
+$10.00/month is less than $15.00/month, but it is only an expectation: if the whole
+house burns down, the actual cost that month is $120,000.00.
+
+## Code Samples
+
+List Implementations
+
+| name | get(i) / set(i, x) | add(i, x) / remove(i) |
+| --- | --- | --- |
+| ArrayStack | O(1) | O(1 + n - i)^a |
+| ArrayDeque | O(1) | O(1 + min{i, n - i})^a |
+| DualArrayDeque | O(1) | O(1 + min{i, n - i})^a |
+| RootishArrayStack | O(1) | O(1 + n - i)^a |
+| DLList | O(1 + min{i, n - i}) | O(1 + min{i, n - i}) |
+| SEList | O(1 + min{i, n - i}/b) | O(b + min{i, n - i}/b)^a |
+| SkiplistList | O(log n)^e | O(log n)^e |
+
+In these tables, ^a marks an amortized bound, ^e an expected bound, and b is the SEList block size.
+
+USet Implementations
+
+| name | find(x) | add(x)/remove(x) |
+| --- | --- | --- |
+| ChainedHashTable | O(1)^e | O(1)^a,e |
+| LinearHashTable | O(1)^e | O(1)^a,e |
+
+SSet Implementations
+
+| name | find(x) | add(x) / remove(x) |
+| --- | --- | --- |
+| SkiplistSSet | O(log n)^e | O(log n)^e |
+| Treap | O(log n)^e | O(log n)^e |
+| ScapegoatTree | O(log n) | O(log n)^a |
+| RedBlackTree | O(log n) | O(log n) |
+| BinaryTrie | O(w) | O(w) |
+| XFastTrie | O(log w)^a,e | O(w)^a,e |
+| YFastTrie | O(log w)^a,e | O(log w)^a,e |
+| BTree | O(log n) | O(B + log n)^a |
+| BTree^x | O(log_B n) | O(log_B n) |
+
+Priority Queue Implementations
+
+| name | findMin() | add(x) / remove() |
+| --- | --- | --- |
+| BinaryHeap | O(1) | O(log n)^a |
+| MeldableHeap | O(1) | O(log n)^e |
+
doc/unit/01/reverse.rb
@@ -0,0 +1,30 @@
+require 'bundler/inline'
+
+gemfile do
+  source 'https://rubygems.org'
+
+  gem 'minitest'
+end
+
+require 'minitest/autorun'
+
+=begin
+Suppose you have a Stack, s, that supports only the push(x) and pop() operations.
+Show how, using only a FIFO Queue, q, you can reverse the order of all elements in s.
+=end
+
+class Example < Minitest::Test
+  def test_valid
+    s = []
+    s.push('A')
+    s.push('B')
+    s.push('C')
+
+    q = Queue.new
+    q.enq(s.pop) until s.empty?  # drain the stack into the queue: C, B, A
+    s.push(q.deq) until q.empty? # drain the queue back onto the stack
+
+    assert_equal ['C', 'B', 'A'], s
+  end
+end
doc/unit/02/README.md
@@ -0,0 +1,386 @@
+Study the following sections from Pat Morin’s textbook:
+
+Read:
+
+* 2.1 ArrayStack: Fast Stack Operations Using an Array
+* 2.2 FastArrayStack: An Optimized ArrayStack
+* 2.3 ArrayQueue: An Array-Based Queue
+* 2.4 ArrayDeque: Fast Deque Operations Using an Array
+* 2.5 DualArrayDeque: Building a Deque from Two Stacks
+* 2.6 RootishArrayStack: A Space-Efficient Array Stack
+
+Execute:
+* https://www.cs.usfca.edu/~galles/visualization/Algorithms.html
+* http://www.cs.usfca.edu/~galles/visualization/StackArray.html
+* http://www.cs.usfca.edu/~galles/visualization/QueueArray.html
+
+# Chapter 2 - Array-Based Lists
+
+Data structures that store their data in a single array share common advantages and limitations:
+
+* constant-time access to any value in the array;
+* not very dynamic: adding or removing near the middle means shifting many elements;
+* the backing array cannot expand or shrink in place, so growing requires copying into a new, larger array.
+
+## ArrayStack
+
+Uses a `backing array` to implement the list interface.
+
+```ruby
+class ArrayStack
+  def initialize
+    @n = 0  # number of elements stored
+    @a = [] # backing array
+  end
+
+  def size
+    @n
+  end
+
+  def [](i)
+    @a[i]
+  end
+
+  def []=(i, x)
+    y = @a[i]
+    @a[i] = x
+    y
+  end
+
+  def add(i, x)
+    resize if @n + 1 > @a.length
+    # shift a[i..n-1] one position to the right
+    @n.downto(i + 1) { |j| @a[j] = @a[j - 1] }
+    @a[i] = x
+    @n += 1
+  end
+
+  def remove(i)
+    x = @a[i]
+    # shift a[i+1..n-1] one position to the left
+    i.upto(@n - 2) { |j| @a[j] = @a[j + 1] }
+    @n -= 1
+    resize if @a.length >= 3 * @n
+    x
+  end
+
+  private
+
+  # copy the elements into a new backing array of length max(2n, 1)
+  def resize
+    b = Array.new([2 * @n, 1].max)
+    @n.times { |k| b[k] = @a[k] }
+    @a = b
+  end
+end
+```
+
+The `ArrayStack` is an efficient way to implement a Stack (a usage sketch follows below).
+Ignoring the cost of calls to `resize`, which works out to O(1) amortized per operation:
+
+* O(1) amortized
+  * push(x) == add(n, x)
+  * pop() == remove(n - 1)
+
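+A quick usage sketch of the `ArrayStack` above, treating the end of the list as the top of the stack (push == `add(n, x)`, pop == `remove(n - 1)`):
+
+```ruby
+s = ArrayStack.new
+s.add(0, 'a')        # a
+s.add(1, 'b')        # a, b
+s.add(2, 'c')        # a, b, c
+s.remove(s.size - 1) # => "c" (pop)
+s.remove(0)          # => "a" (removing from the front shifts the rest left)
+s.size               # => 1
+```
+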
+## FastArrayStack
+### An Optimized ArrayStack
+
+Most of the work in [ArrayStack](#arraystack) was done in
+shifting and copying of data using loops.
+
+Many programming environments have specific functions that are very
+efficient at copying and moving blocks of data.
+
+`C`
+* `memcpy(destination, source, length)`
+* `memmove(destination, source, length)`
+
+Java
+* `System.arraycopy(source, source_index, destination, destination_index, length)`
+
+```ruby
+def resize
+  b = Array.new(new_size)
+  # bulk copy into the new backing array; Ruby implements Array copying in C
+  # https://github.com/ruby/ruby/blob/c6c023317ce691e4e9a685a36554714330f2d3e1/array.c#L488-L503
+  b[0, @n] = @a[0, @n]
+  @a = b
+end
+
+def new_size
+  [2 * @n, 1].max
+end
+```
+[ruby#array.c](https://github.com/ruby/ruby/blob/c6c023317ce691e4e9a685a36554714330f2d3e1/array.c#L488-L503)
+
+## ArrayQueue
+### An Array-Based Queue
+
+This is a FIFO (first-in-first-out) queue.
+Ideally we would have an infinite-length array to add at one end and remove from the other; modular arithmetic lets a finite array behave that way.
+
+Modular arithmetic
+
+Example: 10:00 AM + 5 hours = 3:00 PM
+
+1. (10 + 5) % 12
+1. 15 % 12
+1. 3
+
+Modular arithmetic is useful for simulating an infinite array. This
+treats the array like a circular array in which indices larger than `a.length - 1`
+wrap around to the beginning of the array.
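+
+A two-line illustration of the wrap-around (the array contents here are arbitrary):
+
+```ruby
+(10 + 5) % 12         # => 3, the clock example above
+a = %w[a b c d e]
+a[(3 + 4) % a.length] # => "c" -- index 7 wraps around to index 2
+```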
+
+```java
+boolean add(T x) {
+  if (n + 1 > a.length) resize();
+  a[(j+n) % a.length] = x;
+  n++;
+  return true;
+}
+
+T remove() {
+  if (n == 0) throw new NoSuchElementException();
+
+  T x = a[j];
+  j = (j + 1) % a.length;
+  n--;
+  if (a.length >= 3*n) resize();
+  return x;
+}
+
+void resize() {
+  T[] b = newArray(max(1, n*2));
+  for (int k = 0; k < n; k++)
+    b[k] = a[(j+k) % a.length];
+  a = b;
+  j = 0;
+}
+```
+
+O(1)
+* `add(x)`
+* `remove()`
+
+O(m)
+* `resize()`
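+
+A rough Ruby translation of the same circular-array queue (a sketch, not from the book); `@j` is the index of the head element and `@n` the number of stored elements:
+
+```ruby
+class ArrayQueue
+  def initialize
+    @a = Array.new(1)
+    @j = 0
+    @n = 0
+  end
+
+  def add(x)
+    resize if @n + 1 > @a.length
+    @a[(@j + @n) % @a.length] = x
+    @n += 1
+    true
+  end
+
+  def remove
+    raise 'empty queue' if @n.zero?
+    x = @a[@j]
+    @j = (@j + 1) % @a.length
+    @n -= 1
+    resize if @a.length >= 3 * @n
+    x
+  end
+
+  private
+
+  # copy the n elements, starting from the head, into a new array of length max(2n, 1)
+  def resize
+    b = Array.new([1, 2 * @n].max)
+    @n.times { |k| b[k] = @a[(@j + k) % @a.length] }
+    @a = b
+    @j = 0
+  end
+end
+```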
+
+## Array Deque
+### Fast Deque operations using an array
+
+Allows for efficient addition and removal at both ends.
+Implements the `List` interface by using the same circular array technique used to
+represent an `ArrayQueue`.
+
+```java
+T get(int i) {
+  return a[(j+i) % a.length];
+}
+
+T set(int i, T x) {
+  T y = a[(j+i) % a.length];
+  a[(j+i) % a.length] = x;
+  return y;
+}
+
+void add(int i, T x) {
+  if (n+1 > a.length) resize();
+  if (i < (n / 2)) {
+    j = (j == 0) ? a.length - 1 : j - 1;
+    for (int k = 0; k <= i-1; k++)
+      a[(j+k) % a.length] = a[(j+k+1) % a.length];
+  } else {
+    for (int k = n; k > i; k--)
+      a[(j+k) % a.length] = a[(j+k-1) % a.length];
+  }
+  a[(j+i) % a.length] = x;
+  n++;
+}
+
+T remove(int i) {
+  T x = a[(j+i) % a.length];
+  if (i < (n / 2)) {
+    for (int k = i; k > 0; k--)
+      a[(j+k) % a.length] = a[(j+k-1) % a.length];
+    j = (j + 1) % a.length;
+  } else {
+    for (int k = i; k < n-1; k++)
+      a[(j+k) % a.length] = a[(j+k+1) % a.length];
+  }
+  n--;
+  if (3*n < a.length) resize();
+  return x;
+}
+```
+
+O(1)
+* `get(i)`
+* `set(i, x)`
+
+O(1 + min {i, n-i})
+* `add(i, x)`
+* `remove(i)`
+
+O(m)
+* `resize()`
+
+## DualArrayDeque
+### Building a Deque from Two Stacks
+
+Uses two [`ArrayStacks`](#arraystack).
+Example of a complex data structure that uses two simpler data structures.
+The `front` [`ArrayStack`](#arraystack) stores the list of elements from `0..front.size-1` in reverse order.
+The `back` [`ArrayStack`](#arraystack) stores the list of elements from `front.size..size - 1` in normal order.
+
+```java
+List<T> front;
+List<T> back;
+
+int size() {
+  return front.size() + back.size();
+}
+
+T get(int i) {
+  if (i < front.size()) {
+    return front.get(front.size() - i - 1);
+  } else {
+    return back.get(i - front.size());
+  }
+}
+
+T set(int i, T x) {
+  if (i < front.size()) {
+    return front.set(front.size() - i - 1, x);
+  } else {
+    return back.set(i - front.size(), x);
+  }
+}
+
+void add(int i, T x) {
+  if (i < front.size()) {
+    front.add(front.size() - i, x);
+  } else {
+    back.add(i - front.size(), x);
+  }
+  balance();
+}
+
+T remove(int i) {
+  T x;
+  if (i < front.size()) {
+    x = front.remove(front.size() - i - 1);
+  } else {
+    x = back.remove(i - front.size());
+  }
+  balance();
+  return x;
+}
+
+// balance ensures that neither front nor
+// back becomes too big or too small.
+void balance() {
+  int n = size();
+  if ((3 * front.size()) < back.size()) {
+    int s = (n/2) - front.size();
+    List<T> l1 = newStack();
+    List<T> l2 = newStack();
+    l1.addAll(back.subList(0, s));
+    Collections.reverse(l1);
+    l1.addAll(front);
+    l2.addAll(back.subList(s, back.size()));
+    front = l1;
+    back = l2;
+  } else if (3 * back.size() < front.size()) {
+    int s = front.size() - (n / 2);
+    List<T> l1 = newStack();
+    List<T> l2 = newStack();
+    l1.addAll(front.subList(s, front.size()));
+    l2.addAll(front.subList(0, s));
+    Collections.reverse(l2);
+    l2.addAll(back);
+    front = l1;
+    back = l2;
+  }
+}
+```
+
+The potential method, which is used in the amortized analysis of `balance()`, is not defined in the book.
+
+O(1)
+* `get(i)`
+* `set(i, x)`
+
+O(1 + min{i, n - i})
+* `add(i, x)`
+* `remove(i)`
+
+## RootishArrayStack
+### A space efficient array stack
+
+One drawback of the previous data structures is that they
+store their data in one or two large arrays,
+and they avoid resizing those arrays too often by keeping pre-allocated, unused space around.
+This data structure addresses that problem of wasted space.
+A `RootishArrayStack` stores `n` elements using `O(sqrt(n))` arrays, with at most `O(sqrt(n))` array locations unused at any time.
+
+Elements are stored in a list of `r` arrays called `blocks`, numbered `0..r-1`. Block `b` holds `b + 1` elements, so blocks `0..b-1` hold `b(b+1)/2` elements in total — which is why the index arithmetic below uses the quadratic formula.
+
+```java
+// index to block
+int i2b(int i) {
+  double db = (-3.0 + Math.sqrt(9 + 8.0*i)) / 2.0;
+  int b = (int)Math.ceil(db);
+  return b;
+}
+
+T get(int i) {
+  int b = i2b(i);
+  int j = i - b * (b+1) / 2;
+  return blocks.get(b)[j];
+}
+
+T set(int i, T x) {
+  int b = i2b(i);
+  int j = i - b*(b+1)/2;
+  T y = blocks.get(b)[j];
+  blocks.get(b)[j] = x;
+  return y;
+}
+
+void add(int i, T x) {
+  int r = blocks.size();
+  if (r*(r+1)/2 < n + 1) grow();
+  n++;
+  for (int j = n-1; j > i; j--)
+    set(j, get(j-1));
+  set(i, x);
+}
+
+void grow() {
+  blocks.add(newArray(blocks.size()+1));
+}
+
+T remove(int i) {
+  T x = get(i);
+  for (int j = i; j < n-1; j++)
+    set(j, get(j+1));
+  n--;
+  int r = blocks.size();
+  if ((r-2)*(r-1)/2 >= n) shrink();
+  return x;
+}
+
+void shrink() {
+  int r = blocks.size();
+  while (r > 0 && (r-2)*(r-1)/2 >= n) {
+    blocks.remove(blocks.size()-1);
+    r--;
+  }
+}
+```
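+
+A quick Ruby check of the index-to-block mapping (not from the book): since block `b` holds `b + 1` elements, index `i` lands in the block returned below.
+
+```ruby
+def i2b(i)
+  ((-3.0 + Math.sqrt(9 + 8.0 * i)) / 2.0).ceil
+end
+
+(0..6).map { |i| i2b(i) } # => [0, 1, 1, 2, 2, 2, 3]
+```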
+
+O(1)
+* `get(i)`
+* `set(i, x)`
+
+O(1 + n - i), ignoring calls to `grow()` and `shrink()`
+* `add(i,x)`
+* `remove(i)`
doc/unit/03/README.md
@@ -0,0 +1,39 @@
+# Study Activities
+
+Study the following sections from Pat Morin’s textbook:
+
+1. SLList: A Singly-Linked List
+2. DLList: A Doubly-Linked List
+3. SEList: A Space-Efficient Linked List
+
+Go to Data Structure Visualizations at http://www.cs.usfca.edu/~galles/visualization/Algorithms.html, and try the following:
+
+* Stack: Linked List Implementation
+* Queues: Linked List Implementation
+* Lists: Array Implementation (available in Java version)
+* Lists: Linked List Implementation (available in Java version)
+
+# Linked Lists
+
+Linked lists have advantages and disadvantages compared with array-based implementations of the List interface.
+
+Advantage:
+
+* more dynamic: given a reference to any node, we can insert or delete next to that node in constant time.
+
+Disadvantage:
+
+* we lose constant-time indexing: `get(i)` and `set(i, x)` require walking through the list to reach position `i`.
+
+## Singly-Linked List
+
+An `SLList` (singly-linked list) is a sequence of Nodes; each node `u` stores a value `u.x` and a reference `u.next` to the next node in the sequence.
+
+```java
+class Node {
+  T x;
+  Node next;
+}
+```
+
+For efficiency, the variables `head` and `tail` keep track of the first and last nodes in the sequence.
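+
+A minimal Ruby sketch of an SLList used as a stack (the names are illustrative, not the book's code): `push(x)` and `pop()` both operate at the head, so each runs in constant time.
+
+```ruby
+class SLList
+  Node = Struct.new(:x, :next)
+
+  def initialize
+    @head = nil
+    @tail = nil
+    @n = 0
+  end
+
+  # add x at the head of the list
+  def push(x)
+    u = Node.new(x, @head)
+    @head = u
+    @tail = u if @n.zero?
+    @n += 1
+    x
+  end
+
+  # remove and return the value at the head of the list
+  def pop
+    return nil if @n.zero?
+    x = @head.x
+    @head = @head.next
+    @n -= 1
+    @tail = nil if @n.zero?
+    x
+  end
+end
+```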
doc/unit/04/README.md
doc/unit/05/README.md
doc/unit/06/README.md
doc/unit/07/README.md
doc/unit/08/README.md
doc/unit/09/README.md
doc/unit/10/README.md
doc/unit/11/README.md
doc/unit/12/README.md
doc/unit/13/README.md
doc/unit/README.pdf
Binary file
.gitignore
@@ -1,4 +1,3 @@
 *.o
 main
-junit/
-doc/
+build/
Makefile
@@ -1,6 +1,6 @@
 CC=gcc
 
-test : main
+test : build/main
 	cgreen-runner -c main
 
 ci : main