# Why Dependent Types matter

tl;dr: With respect to expressive power in logic, Dependent Types matter because they allow us to express universal and existential quantification.

Most mainstream programming languages (Python, JavaScript) have the power to express propositional logic. For example, you can express AND, NOT and OR in Python, JavaScript or any other mainstream language. See my previous post, Hierarchy of logical systems, for more details on the various logical systems.
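For instance, here is a minimal Python sketch of the propositional connectives (the function names are mine, purely for illustration):

```python
# Propositional connectives, expressed with Python's built-in
# boolean operators.
def conjunction(p, q):
    return p and q        # AND

def disjunction(p, q):
    return p or q         # OR

def negation(p):
    return not p          # NOT

def implication(p, q):
    return (not p) or q   # p -> q, via material implication
```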

However, if we are interested in mathematical proofs (and software correctness built upon them), then we need to work with higher-order logic, where we have the notions of universal (forall) and existential (exists) quantifiers.

So, what are Dependent Types? Wikipedia formally defines them as:

> Loosely speaking, dependent types are similar to the type of an indexed family of sets. More formally, given a type `A:U` in a universe of types `U`, one may have a family of types `B:A->U`, which assigns to each term `a:A` a type `B(a):U`. We say that the type `B(a)` varies with `a`.

This definition allows us to map (transform) terms to types, bringing the notions of values and types closer together.

Let’s have a look at an Idris example of dependent types:

```
f : Nat -> Type
f Z = Bool
f x = String
```

As a result:

```
Idris> f 0
Bool : Type
Idris> f 1
String : Type
Idris> f 2
String : Type
```

Python example of “dependent types”:

```
def f(x):
    if x == 0:
        return bool
    else:
        return str
```

As a result:

```
>>> f(0)
<type 'bool'>
>>> f(1)
<type 'str'>
>>> f(2)
<type 'str'>
```

So we seem to be able to encode those in Python as well. But there is no compile step in Python to check that the types flow correctly.

So we can’t really take advantage of having the programming language do all the necessary logical checks for us in order to form a notion of mathematical proof.
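To make the contrast concrete, here is a minimal Python sketch (all names are mine) of how a mismatch between type-level values flows through unchecked until the code actually runs:

```python
def f(x):
    # Same idea as the earlier example: map a value to a type.
    return bool if x == 0 else str

def expects_bool_type(t):
    # We "require" the type bool here, but nothing verifies this
    # requirement before the program runs.
    if t is not bool:
        raise TypeError("expected bool")
    return "ok"

def check(x):
    # The mismatch surfaces only at runtime, never ahead of time.
    try:
        return expects_bool_type(f(x))
    except TypeError:
        return "caught only at runtime"

print(check(0))  # ok
print(check(1))  # caught only at runtime
```

In Idris, the analogous mismatch would be rejected before the program ever runs.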

Where Idris really shines is in the following example:

```
f : {a : Nat} -> GT a 0 -> String
f {a} proof_that_a_gt_zero = "It is type safe that " ++ (show a) ++ " is greater than zero!"

produce_necessary_proof : (a : Nat) -> LTE 1 (S a)
produce_necessary_proof a = (LTESucc {left=Z} {right=a}) (LTEZero {right=a})

safe : Nat -> String
safe Z     = "Whoops!"
safe (S x) = f (produce_necessary_proof x)
```

Let’s try to understand this example by breaking it up.

1. `LTESucc` and `LTEZero` are the constructors of the data type `LTE`. A few examples for `LTEZero`:

```
Idris> LTEZero {right = 0}
LTEZero : LTE 0 0
Idris> LTEZero {right = 1}
LTEZero : LTE 0 1
Idris> LTEZero {right = 2}
LTEZero : LTE 0 2
```

`LTESucc` allows us to say that if we have a proof of `LTE a b` then we can construct a proof of `LTE (a + 1) (b + 1)`:

```
Idris> :t LTESucc
LTESucc : LTE left right -> LTE (S left) (S right)
```

`GT a 0` is just a friendly wrapper around `LTE 1 a`.

2. Let’s try to explain what `f`, `produce_necessary_proof` and `safe` do.
So `f` is a function that takes any natural number greater than zero (implicitly), together with a proof that it is greater than zero, and outputs a string for it.

It’s like saying, mathematically: for all $x \in \mathbb{N}$ such that $x > 0$, “It is type safe that {x} is greater than zero!”. We can see how closely the two formulations correspond.

So to produce the proof that a natural number is greater than zero (`LTE 1 x`), we implement `produce_necessary_proof`. Examples:

```
Idris> produce_necessary_proof 0
LTESucc LTEZero : LTE 1 1
Idris> produce_necessary_proof 1
LTESucc LTEZero : LTE 1 2
Idris> produce_necessary_proof 2
LTESucc LTEZero : LTE 1 3
```

`safe` then just combines `f` with `produce_necessary_proof`.

3. Compile-time checking guarantees that `f` only accepts a natural number greater than zero. While this is a purely mathematical example, we can also do industry examples…

Now, given all of the above, we can’t really do that in Python (or most mainstream languages) and have the compiler do all the checks for us and produce proofs for them 🙂

# Finding nth term in a sequence

I found this to be an interesting task from a regional math contest.

Given:
$a_0 = 6$
$a_k = \frac{1}{2} a_{k-1}$, if $a_{k-1}$ is even
$a_k = 3 a_{k-1} + 1$, if $a_{k-1}$ is odd
Find $a_{2018}$.

Starting with $a_6$, the sequence repeats $(4, 2, 1)$ at indices of the form $(3k, 3k + 1, 3k + 2)$ (see proof below). Since $2018 = 3 \cdot 672 + 2$ is of the form $3k + 2$, the answer is $a_{2018} = 1$.

Checking it with Idris:

```
testseq : Integer -> Integer
testseq 0 = 6
testseq k = let previous = testseq (k - 1) in
            if mod previous 2 == 0
               then div previous 2
               else 3 * previous + 1

infiniteSeq : Stream Nat
infiniteSeq = map (\x => fromIntegerNat (testseq x)) [0..]

-- *test> take 20 infiniteSeq
-- [6, 3, 10, 5, 16, 8, 4, 2, 1, 4, 2, 1, 4, 2, 1, 4, 2, 1, 4, 2] : List Nat

getItemFromSeq : Nat -> Nat
getItemFromSeq n = index n infiniteSeq

-- *test> getItemFromSeq 2018
-- 1 : Nat
```

Proof of the repeating sequence. Given:
$a_0 = 4$
$a_k = \frac{1}{2} a_{k-1}$, if $a_{k-1}$ is even
$a_k = 3 a_{k-1} + 1$, if $a_{k-1}$ is odd

Prove that the sequence repeats $(4, 2, 1)$, i.e. any triplet $(a_k, a_{k+1}, a_{k+2})$ equals one of $\{ (4, 2, 1), (1, 4, 2), (2, 1, 4) \}$.

Base case: $(a_0, a_1, a_2) = (4, 2, 1)$

Inductive step:
Case 1: Assume that $(a_k, a_{k+1}, a_{k+2}) = (4, 2, 1)$
Since $a_{k+2} = 1$, by definition we have that $a_{k+3} = 4$.
Thus $(a_{k+1}, a_{k+2}, a_{k+3}) = (2, 1, 4)$, which is what we needed to show.

Cases 2 and 3 are proven similarly.

Thus the sequence repeats $(4, 2, 1)$.
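The induction above can also be checked mechanically for the first few terms; here is a quick Python sketch of the recurrence, starting from $a_0 = 4$:

```python
def step(a):
    # One step of the recurrence: halve if even, otherwise 3a + 1.
    return a // 2 if a % 2 == 0 else 3 * a + 1

# Generate the first 12 terms starting from a_0 = 4.
seq = [4]
for _ in range(11):
    seq.append(step(seq[-1]))

# Every consecutive triple should be a rotation of (4, 2, 1).
rotations = {(4, 2, 1), (1, 4, 2), (2, 1, 4)}
triples = [tuple(seq[i:i + 3]) for i in range(len(seq) - 2)]
print(seq[:6])                                # [4, 2, 1, 4, 2, 1]
print(all(t in rotations for t in triples))   # True
```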

# Proofs with Idris

First, let’s try to understand what Idris’s `Refl` does. It is defined as follows:

```
data (=) : a -> b -> Type where
  Refl : x = x
```

Now, let’s try to implement nats ourselves:

```
data MyNat = Zero | Next MyNat

my_add : MyNat -> MyNat -> MyNat
my_add Zero     b = b
my_add (Next a) b = Next (my_add a b)
```

At compile time, when Idris sees `Refl`, it tries to evaluate both sides using the given definitions. So it understands that `my_add Zero (Next Zero)` is `Next Zero`, by `my_add` and the definition of `MyNat`. E.g.:

```
my_proof : my_add Zero (Next Zero) = Next Zero
my_proof = Refl
```

Now let’s try to write functions (both head and tail recursive) that will convert a list to a vector so we have the length of the list at type level.

```
data Vect : Nat -> Type -> Type where
  VNil  : Vect Z a
  VCons : a -> Vect k a -> Vect (S k) a
```

We will prove a helper lemma, `watwat`, with the following type. We’ll see how and why we’ll be using it later.

```
watwat : (left : Nat) -> (right : Nat) -> S (plus left right) = plus left (S right)
watwat Z right = Refl -- to prove S (plus 0 right) = plus 0 (S right) it's just reflexivity, by def of plus
```

Now for the other case (first argument not zero), if we write a hole:

```
watwat (S left) right = ?hole
```

and check what we need to fill in: `hole : S (S (plus left right)) = S (plus left (S right))`

So, in theory, we need to pass a function which takes a value of the previous type and produces a value matching the new type.

That is, what we need to show here is that we can go from
`S (plus left right) = plus left (S right)`
to
`S (plus (S left) right) = plus (S left) (S right) [original definition]`,
i.e. `S (S (plus left right)) = S (plus left (S right)) [simplified by Idris when we checked the hole]`.

So now, it’d be good if we had a function of type `(a = b) -> f a = f b`. Then we could just apply it to `(watwat left right)` to get the correct match. The standard library already has `cong`, and it is exactly what we need.

```
watwat (S left) right = cong (watwat left right)
```

We proved it using lambda terms. To prove it equivalently with a `rewrite`:

```
watwat (S left) right =
  let inductiveHypothesis = watwat left right in
    rewrite inductiveHypothesis in Refl
-- Or, even simpler, just watwat = plusSuccRightSucc
```

Now let’s go ahead and implement a function that converts a list to a vector (the vector will know the length at type level):

```
-- The following head recursion works fine, because types flow nicely
f : (l : List Char) -> Vect (length l) Char
f [] = VNil
f (x :: xs) = VCons x (f xs)
```

Now let’s implement the same computation with tail recursion:

```
f_tail : Vect n Char -> (l : List Char) -> Vect (length l + n) Char
f_tail v [] = v
```

The following won’t work because types won’t match:

```
f_tail {n} v (x :: xs) = f_tail (VCons x v) xs
```

If, instead we use a hole:

```
f_tail {n} v (x :: xs) = ?hole
```

We get that `hole : Vect (S (plus (length xs) n)) Char`

So we need to go from
`Vect (length xs + S n) Char` (the type of `f_tail (VCons x v) xs`)
to
`Vect (S (plus (length xs) n)) Char` (the expected type).

We can use our watwat theorem to get to it.

```
f_tail {n} v (x :: xs) =
  let inductiveHypothesis = watwat (length xs) n in
    rewrite inductiveHypothesis in (f_tail (VCons x v) xs)
```

To implement this in lambda terms, we first introduce ourselves to `replace` and `sym`:

```
*induction> :t replace
replace : (x = y) -> P x -> P y
*induction> :t sym
sym : (left = right) -> right = left
```

Example usage:

```
f : 3 = (4 - 1)
f = Refl
```

Now note how Idris evaluates the types:

```
*test> :t cong {f = \x => Vect x Char} f
cong f : Vect 3 Char = Vect 3 Char
```

So proceeding with our implementation in lambda terms:

```
f_tail {n} v (x :: xs) =
  replace {P = \m => Vect m Char} (sym (watwat (length xs) n)) (f_tail (VCons x v) xs)
```

To recap what we did, by Curry-Howard isomorphism (proof by lambda construction) we proved the following theorems:

```
(left : Nat) -> (right : Nat) -> S (plus left right) = plus left (S right)
(l : List Char) -> Vect (length l) Char
Vect n Char -> (l : List Char) -> Vect (length l + n) Char
```

Bonus:
We can use `:set showimplicits` to see the implicit arguments, so that we can pass stuff to them.

Given:

```
f : 3 = 3
f = Refl
```

Here’s an example of `cong` which doubles an equality:

```
double : (x = x) -> (x + x = x + x)
double n = cong {f = \x => x + x} n
```

And another example of `replace` which replaces the type:

```
test_replace : {n : Type} -> (x = x) -> n -> n
test_replace f = replace {P = \x => n} f
```

Results in:

```
*test> double f
Refl : 6 = 6
*test> test_replace {n = Bool} f
replace Refl : Bool -> Bool
```