# Lambda calculus implementation in Scheme

Lambda calculus is a formal system for representing computation. Like most formal systems in mathematics, it relies heavily on substitution.

We will start by implementing a subst procedure that accepts an expression e, a source src, and a destination dst, and replaces all occurrences of src with dst in e.

(define (subst e src dst)
  (cond ((equal? e src) dst)
        ((pair? e) (cons (subst (car e) src dst)
                         (subst (cdr e) src dst)))
        (else e)))


Trying it a couple of times:

> (subst '(lambda (x) x) 'x 'y)
'(lambda (y) y)
> (subst '(lambda (x) x) '(lambda (x) x) 'id)
'id
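
For comparison, the same structural recursion can be sketched in Python, using nested tuples in place of S-expressions (the tuple encoding and names here are my own, not part of the Scheme code):

```python
# Hypothetical Python port of subst: an expression is an atom or a tuple.
def subst(e, src, dst):
    if e == src:
        return dst                                   # exact match: replace
    if isinstance(e, tuple):
        return tuple(subst(x, src, dst) for x in e)  # recurse into sub-expressions
    return e                                         # plain atom: leave as-is

# ('lambda', ('x',), 'x') with x -> y becomes ('lambda', ('y',), 'y')
renamed = subst(('lambda', ('x',), 'x'), 'x', 'y')
```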


Next, based on this substitution we need to implement a beta-reduce procedure: an application $(\lambda x . t)\, s$ reduces to $t[x := s]$, that is, $t$ with every occurrence of $x$ replaced by $s$.

Our procedure will consider 3 cases:

1. Lambda expression that accepts zero args – in which case we just return the body without any substitutions
2. Lambda expression that accepts a single argument – in which case we substitute every occurrence of that argument in the body with what’s passed to the expression and return the body
3. Lambda expression that accepts multiple arguments – in which case we substitute every occurrence of the first argument in the body with what’s passed to the expression and return a new lambda expression

Before implementing the beta reducer, we will implement a predicate lambda-expr? that returns true if the expression is a lambda expression, and false otherwise:

(define (lambda-expr? e)
  (and (pair? e)
       (equal? (car e) 'lambda)))


Here’s the helper procedure which accepts a lambda expression e and a single argument x to pass to the expression:

(define (beta-reduce-helper e x)
  (cond ((and (lambda-expr? e) (> (length (cadr e)) 1))
         ; lambda expr that accepts multiple args
         (list 'lambda
               (cdr (cadr e))
               (subst (caddr e) (car (cadr e)) x)))
        ((and (lambda-expr? e) (= (length (cadr e)) 1))
         ; lambda expr that accepts a single arg
         (subst (caddr e) (car (cadr e)) x))
        ((and (lambda-expr? e) (null? (cadr e)))
         ; lambda expr with zero args
         (caddr e))
        (else e)))


Then, our procedure beta-reduce will accept a variable number of arguments and apply each of them in turn via beta-reduce-helper:

(define (beta-reduce l . xs)
  (if (pair? xs)
      (apply beta-reduce
             (beta-reduce-helper l (car xs))
             (cdr xs))
      l))


Testing these with a few cases:

> (beta-reduce '(lambda (x y) x) 123)
'(lambda (y) 123)
> (beta-reduce '(lambda (x y) y) 123)
'(lambda (y) y)
> (beta-reduce '(lambda (x) (lambda (y) x)) 123)
'(lambda (y) 123)
> (beta-reduce '(lambda (x) (lambda (y) y)) 123)
'(lambda (y) y)


However, note this case:

> (beta-reduce '(lambda (n f x) (f (n f x))) '(lambda (f x) x))
'(lambda (f x) (f ((lambda (f x) x) f x)))


It seems that we can further apply beta reductions to simplify that expression. For that, we will implement lambda-eval that will recursively evaluate lambda expressions to simplify them:

(define (lambda-eval e)
  (cond ((can-beta-reduce? e) (lambda-eval (apply beta-reduce e)))
        ((pair? e) (cons (lambda-eval (car e))
                         (lambda-eval (cdr e))))
        (else e)))


But, what does it mean for an expression e to be beta reducible? The predicate is simply:

(define (can-beta-reduce? e)
  (and (pair? e) (lambda-expr? (car e)) (pair? (cdr e))))


Great. Let’s try a few examples now:

> ; Church encoding: 1 = succ 0
> (lambda-eval '((lambda (n f x) (f (n f x))) (lambda (f x) x)))
'(lambda (f x) (f x))
> ; Church encoding: 2 = succ 1
> (lambda-eval '((lambda (n f x) (f (n f x))) (lambda (f x) (f x))))
'(lambda (f x) (f (f x)))
> ; Church encoding: 3 = succ 2
> (lambda-eval '((lambda (n f x) (f (n f x))) (lambda (f x) (f (f x)))))
'(lambda (f x) (f (f (f x))))
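
As a sanity check, the same Church numerals behave identically when written with a host language's native closures; here is a quick Python sketch (the names zero, succ, and to_int are mine):

```python
# Church numerals as native Python lambdas
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    # Decode a Church numeral by counting applications of f
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))  # 3 = succ (succ (succ 0))
```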


There’s our untyped lambda calculus 🙂

There are a couple of improvements we could make, for example implementing define within the system so that variables can be bound to values. Another neat addition would be extending the system with a type checker.

EDIT: As noted by a reddit user, the substitution procedure is not considering free/bound variables. Here’s a gist that implements that as well.
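
To see the problem concretely: naive substitution happily replaces a variable under a binder that captures it. A quick Python sketch of the same naive algorithm (tuple encoding mine) shows the bug:

```python
# Naive substitution: replaces src everywhere, ignoring binders
def subst(e, src, dst):
    if e == src:
        return dst
    if isinstance(e, tuple):
        return tuple(subst(x, src, dst) for x in e)
    return e

# (lambda (y) x)[x := y] should keep x free (e.g. by renaming the binder),
# but naive substitution produces (lambda (y) y): the free x is "captured".
captured = subst(('lambda', ('y',), 'x'), 'x', 'y')
```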

# Closed-expression of a sum with proof in Idris

A well-known fact is the sum $1 + 2 + \ldots + n = \frac {n(n + 1)} {2}$. Let’s try to prove this fact in Idris.

We start intuitively by defining our recursive sum function:

total sum : Nat -> Nat
sum Z     = Z
sum (S n) = (S n) + sum n


Testing it a few times:

Idris> sum 3
6 : Nat
Idris> sum 4
10 : Nat


Looks good.
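
As a quick empirical check outside Idris, the recursive sum and the closed form agree on small inputs; a throwaway Python sketch:

```python
# Compare the recursive sum with the closed form n(n+1)/2
def rec_sum(n):
    return 0 if n == 0 else n + rec_sum(n - 1)

# collect any counterexamples among small n (there should be none)
mismatches = [n for n in range(100) if rec_sum(n) != n * (n + 1) // 2]
```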

Next, we will come up with our dependently typed function to prove the fact.

theorem_1_firsttry : (n : Nat) -> sum n = divNat (n * (n + 1)) 2
theorem_1_firsttry Z     = ?a
theorem_1_firsttry (S n) = ?b


The base case that we need to prove is of type 0 = divNat 0 2. Looks a bit tricky. Let’s try to use divNatNZ along with a proof that 2 is not zero:

theorem_1_secondtry : (n : Nat) -> sum n = divNatNZ (n * (n + 1)) 2 (SIsNotZ {x = 1})
theorem_1_secondtry Z     = ?a
theorem_1_secondtry (S n) = ?b


Now the base case is just Refl. Let’s put an inductive hypothesis as well:

theorem_1_secondtry : (n : Nat) -> sum n = divNatNZ (n * (n + 1)) 2 (SIsNotZ {x = 1})
theorem_1_secondtry Z     = Refl
theorem_1_secondtry (S n) = let IH = theorem_1_secondtry n in ?b


Idris tells us that we now need to prove:

b : S (plus n (sum n)) =
    ifThenElse (lte (plus (plus n 1) (mult n (S (plus n 1)))) 0)
               (Delay 0)
               (Delay (S (Prelude.Nat.divNatNZ, div'
                           (S (plus (plus n 1) (mult n (S (plus n 1)))))
                           1
                           SIsNotZ
                           (plus (plus n 1) (mult n (S (plus n 1))))
                           (minus (plus (plus n 1) (mult n (S (plus n 1)))) 1)
                           1)))


Woot.

Let’s take a slightly different route and do a few algebraic tricks to get rid of division. Instead of proving that $1 + 2 + \ldots + n = \frac {n(n + 1)} {2}$, we will prove $2 (1 + 2 + \ldots + n) = n(n + 1)$.
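
On paper, the induction step for the multiplied-out statement goes like this:

```latex
% Inductive hypothesis: 2\sum_{k=1}^{n} k = n(n+1)
\begin{aligned}
2\sum_{k=1}^{n+1} k &= 2(n+1) + 2\sum_{k=1}^{n} k \\
                    &= 2(n+1) + n(n+1) && \text{(inductive hypothesis)} \\
                    &= (n+1)(n+2).
\end{aligned}
```

The Idris proof performs the same reshuffling step by step with rewrite.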

total theorem_1 : (n : Nat) -> 2 * sum n = n * (n + 1) -- sum n = n * (n + 1) / 2
theorem_1 Z     = Refl
theorem_1 (S n) = ?b


Now we need to show that b : S (plus (plus n (sum n)) (S (plus (plus n (sum n)) 0))) = S (plus (plus n 1) (mult n (S (plus n 1)))).

total theorem_1 : (n : Nat) -> 2 * sum n = n * (n + 1) -- sum n = n * (n + 1) / 2
theorem_1 Z     = Refl
theorem_1 (S n) = let IH = theorem_1 n in
  rewrite (multRightSuccPlus n (plus n 1)) in
  rewrite sym IH in
  rewrite (plusZeroRightNeutral (sum n)) in
  rewrite (plusZeroRightNeutral (plus n (sum n))) in
  rewrite (plusAssociative n (sum n) (sum n)) in
  rewrite (sym (plusSuccRightSucc (plus n (sum n)) (plus n (sum n)))) in
  rewrite plusCommutative (plus n 1) (plus (plus n (sum n)) (sum n)) in
  rewrite sym (plusSuccRightSucc n Z) in
  rewrite plusZeroRightNeutral n in
  rewrite (sym (plusSuccRightSucc (plus (plus n (sum n)) (sum n)) n)) in
  rewrite (sym (plusAssociative (n + sum n) (sum n) n)) in
  rewrite plusCommutative (sum n) n in Refl


Looks a bit big, but it works! With lines 4 and 5 we get rid of multiplication, and then all we need is some algebraic reordering of plus to show that both sides are equal.

Now that we proved it, you can use this fact in your favorite programming language 🙂

# Proving length of mapped and filtered lists in Idris

First, let’s start by implementing map' and filter' for lists:

total map' : (a -> b) -> List a -> List b
map' _ [] = []
map' f (x :: xs) = f x :: map' f xs

total filter' : (a -> Bool) -> List a -> List a
filter' p []      = []
filter' p (x::xs) with (p x)
  filter' p (x::xs) | True  = x :: filter' p xs
  filter' p (x::xs) | False = filter' p xs


Trying a few cases:

Idris> map' (\x => x + 1) [1, 2]
[2, 3] : List Integer
Idris> filter' (\x => x /= 2) [1, 2]
[1] : List Integer


Looks neat.

A valid question would be: What do we know about the length of a mapped and length of a filtered list?

Intuition says that the length of a mapped list will be the same as the length of the original list, since mapping may change the values of the elements but not the length (size) of the list. Let’s prove this fact:

-- For any given list xs, and any function f, the length of xs is same as the length of xs mapped with f
total theorem_1 : (xs : List a) -> (f : a -> b) -> length xs = length (map' f xs)
theorem_1 [] _        = Refl
theorem_1 (x :: xs) f = let I_H = theorem_1 xs f in rewrite I_H in Refl


Easy peasy, just use induction.

Filtering is a bit trickier. The length of a filtered list is less than or equal to the length of the original list. The intuitive reasoning is as follows:

1. Maybe the filter will apply to some elements, in which case the length of the filtered list will be less than the length of the original list
2. Or, maybe the filter will not apply at all, in which case the length of the filtered list is the same as the length of the original list
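
These two cases can be observed empirically before we prove anything; a small Python sketch (the example list and predicates are mine):

```python
# len([x for x in xs if p(x)]) <= len(xs) for any predicate p
xs = [1, 2, 3, 4, 5]

dropped_some = [x for x in xs if x != 2]   # filter removes some elements
dropped_none = [x for x in xs if x < 10]   # filter removes nothing
```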

Let’s prove it!

-- For any given list xs, and any filtering function f, the length of xs >= the length of xs filtered with f
total theorem_2 : (xs : List a) -> (f : a -> Bool) -> LTE (length (filter' f xs)) (length xs)
theorem_2 [] _        = LTEZero {right = 0}
theorem_2 (x :: xs) f with (f x)
  theorem_2 (x :: xs) f | False = let I_H = theorem_2 xs f in let LTESuccR_I_H = lteSuccRight I_H in LTESuccR_I_H
  theorem_2 (x :: xs) f | True  = let I_H = theorem_2 xs f in let LTESucc_I_H  = LTESucc I_H in LTESucc_I_H


I constructed this proof using holes. The base case was very simple; the inductive step, however, needs more work. There we consider two cases:

1. In the case the filter was applied (False), the I_H needs to match the target type LTE _ (S _)
2. In the case the filter was not applied (True), the I_H needs to match the target type LTE (S _) (S _)

Idris has built-in proofs for these, with the following types:

Idris> :t lteSuccRight
lteSuccRight : LTE n m -> LTE n (S m)
Idris> :t LTESucc
LTESucc : LTE left right -> LTE (S left) (S right)


So we just needed to use them to conclude the proof.

Bonus: The only reason I rewrote filter' was to use with, which seems easier to rewrite against when proving things about it. The built-in filter uses ifThenElse, and I haven’t found a way to rewrite goals that use it. I rewrote map' just for consistency.

Bonus 2: Thanks to gallais@reddit for this hint. It seems that the same with (f x) used in the proof also makes the ifThenElse reduce.