Dependently Typed Lambda Calculus in Haskell

I found this awesome paper, “A Tutorial Implementation of a Dependently Typed Lambda Calculus” by Andres Löh, Conor McBride, and Wouter Swierstra, and I decided to give it a go.

I read the paper a few times. On the first pass I just skimmed through, copying and pasting the code as I went. Then, after working through the examples and playing with them for a bit, I read it again thoroughly, trying to understand the mathematical definitions, the corresponding Haskell definitions, and how they map to each other.

As I implemented the code, I made sure to create a Git tag for each chapter. This made it easier to match the code to the mathematical definitions in the corresponding chapter of the paper.

Besides my interest in dependent types, I found the learning process itself interesting, hence this blog post 🙂

Even though papers take some time to read and digest, I highly recommend this paper to anyone interested in diving deeper into dependent types.

You can check the source code at

Bonus: Had some fun implementing TAPL exercise 3.2.4 in literate Haskell here.



If you can keep your head when all about you
Are losing theirs and blaming it on you,
If you can trust yourself when all men doubt you,
But make allowance for their doubting too.
If you can wait and not be tired by waiting,
Or being lied about, don’t deal in lies,
Or being hated, don’t give way to hating,
And yet don’t look too good, nor talk too wise:

If you can dream—and not make dreams your master;
If you can think—and not make thoughts your aim;
If you can meet with Triumph and Disaster,
And treat those two impostors just the same;
If you can bear to hear the truth you’ve spoken
Twisted by knaves to make a trap for fools,
Or watch the things you gave your life to, broken,
And stoop and build ’em up with worn-out tools:

If you can make a heap of all your winnings
And risk it on one turn of pitch-and-toss,
And lose, and start again at your beginnings
And never breathe a word about your loss;
If you can force your heart and nerve and sinew
To serve your turn long after they are gone,
And so hold on when there is nothing in you
Except the Will which says to them: “Hold on!”

If you can talk with crowds and keep your virtue,
Or walk with Kings—nor lose the common touch,
If neither foes nor loving friends can hurt you,
If all men count with you, but none too much;
If you can fill the unforgiving minute
With sixty seconds’ worth of distance run,
Yours is the Earth and everything that’s in it,
And—which is more—you’ll be a Man, my son!

Rudyard Kipling, 1910

Proving Monoids with Idris

I was watching an interesting video, the Opening Keynote by Edward Kmett at Lambda World 2018. Note how, at the beginning of that video, the Haskell code carries comments where proofs ought to be.

Enter Idris.

As a short example, we can abstract a proven Monoid as an interface:

interface ProvenMonoid m where
	mappend : m -> m -> m
	mempty  : m
	-- The monoid laws, carried as proof obligations:
	assocProof : (a, b, c : m) -> mappend a (mappend b c) = mappend (mappend a b) c
	idProofL   : (a : m) -> mappend mempty a = a
	idProofR   : (a : m) -> mappend a mempty = a

Do Nats make a Monoid? Sure they do, and here’s the proof:

implementation ProvenMonoid Nat where
	mappend x y = x + y
	mempty      = 0

	-- Proofs that this monoid is really a monoid.
	assocProof  = plusAssociative
	idProofL    = plusZeroLeftNeutral
	idProofR    = plusZeroRightNeutral

Note that plusAssociative, plusZeroLeftNeutral, and plusZeroRightNeutral are already part of Idris’ prelude. In the next example we’ll do an actual proof ourselves instead of re-using existing proofs.
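For reference, these prelude lemmas have exactly the types that ProvenMonoid’s proof fields demand. The signatures below are quoted from memory from Idris 1’s Prelude.Nat, so treat the exact argument names as approximate:

```idris
-- Signatures of the Prelude.Nat lemmas used above (approximate; you can
-- check them in the REPL with :t plusAssociative, etc.):
plusAssociative      : (left, centre, right : Nat) ->
                       left + (centre + right) = (left + centre) + right
plusZeroLeftNeutral  : (right : Nat) -> 0 + right = right
plusZeroRightNeutral : (left : Nat) -> left + 0 = left
```

Since mappend is + and mempty is 0 in this instance, these are precisely the types of assocProof, idProofL, and idProofR, which is why the fields can be filled in by direct assignment.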

What about lists of Nats? Yep.

implementation ProvenMonoid (List Nat) where
	mappend x y = x ++ y
	mempty      = []

	-- Proofs that this monoid is really a monoid.
	assocProof [] b c      = Refl
	assocProof (a::as) b c = rewrite assocProof as b c in Refl
	idProofL a             = Refl
	idProofR []            = Refl
	idProofR (x::xs)       = rewrite idProofR xs in Refl
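As an aside (not in the original post): nothing in these proofs depends on the elements being Nats, so the same instance body should go through for lists over any element type. A sketch, which would replace the List Nat instance above since the two implementations would overlap:

```idris
-- Hypothetical generalisation: the proofs are structural on the list,
-- so the element type can stay abstract.
implementation ProvenMonoid (List a) where
	mappend x y = x ++ y
	mempty      = []

	assocProof [] b c      = Refl
	assocProof (a::as) b c = rewrite assocProof as b c in Refl
	idProofL a             = Refl
	idProofR []            = Refl
	idProofR (x::xs)       = rewrite idProofR xs in Refl
```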

Some valid questions are:

  • Why should I care about monoids?
    • Edward covers this in the video at around 3:00.
  • Why should I care about proving monoids?
    • Suppose an algorithm running in production relies on a Monoid. We don’t want associativity to break, because if it does, the “combine” step applied across all of the partial results will likely produce wrong answers.
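To see how Idris would reject a broken “combine” at compile time, consider truncated subtraction on Nat: it is not associative, and we can even prove a concrete counterexample (my own example, not from the talk):

```idris
-- minus 2 (minus 1 1) reduces to 2, while minus (minus 2 1) 1 reduces
-- to 0, so the two sides can never be equal. Any attempted ProvenMonoid
-- instance built on `minus` would leave assocProof unprovable.
minusNotAssoc : Not (minus 2 (minus 1 1) = minus (minus 2 1) 1)
minusNotAssoc Refl impossible
```

With the comment-as-proof style from the talk, such a broken instance would slip through; here the type checker refuses it outright.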


EDIT: Updated first example per u/gallais’s comment.