# Learning Agda by implementing matrix multiplication

Posted on April 14, 2019

Lately, I've been exploring the Agda programming language. Agda is a pure functional programming language with dependent types. One of Agda's biggest merits is its type system. As a programming language designed to be a proof assistant, Agda's type system is actually a consistent logical system. So, it is powerful enough to express complex invariants. I've decided to test Agda's strength by implementing type-safe matrix multiplication.

By the way, this HTML document was generated from a Markdown file written in literate Agda, so you can go to the source code of this website, grab the document, and compile the Agda code yourself!

## Motivation

Often, when we write code, we have certain preconditions, postconditions, and invariants that we want to enforce. Static typing is a way to prove properties about programs. For example, when one writes `int x = 0;`, the compiler ensures that `x` always holds an `int`. However, we also have more complicated requirements that we can't express as types. For these cases, we write tests so that we can catch bugs. Yet, unlike types, tests do not prove the correctness of the program.

However, dependent types are a game-changer. With dependent types, values can appear in types, allowing us to express properties in types that we could only specify in comments and tests otherwise. If types are compiler-enforced documentation, dependent types take such documentation to their logical conclusion, turning informal invariants into formal ones.

A canonical example of such conditions is matrix multiplication. A precondition of the operation is that the number of columns of the left matrix equals the number of rows of the right matrix. A postcondition of the operation is that the resulting matrix has the same number of rows as the left matrix and the same number of columns as the right matrix.

The conditions of matrix multiplication might look something like this pseudocode:

```
(Matrix<T, i, j>, Matrix<T, j, k>) -> Matrix<T, i, k>
where T : Addable, Multipliable
```

Can I express these invariants as an actual type in Agda?

## The implementation

First, we import the required modules from the standard library:

### Algebraic structures

```agda
open import Algebra using (CommutativeSemiring)
```

Here, we import the commutative semiring from the Algebra module. When we think of vectors and matrices, we usually only think of them as containing numbers. However, what kind of numbers? The rationals? The reals? When one speaks of vectors, one speaks of "vector spaces over a field." A field consists of a set and some notion of addition, subtraction, multiplication, and division on the set.

However, we are only going to define the dot product and matrix multiplication, so we are dealing with modules, not vector spaces. While vector spaces are parameterized over a field, modules are parameterized over a ring, which has addition, subtraction, and multiplication, or a semiring, which has addition and multiplication. We only need addition and commutative multiplication of scalars, so we only need a commutative semiring. (The commutativity ensures that the dot product is commutative, so whether we multiply row by column or column by row is an implementation detail.)

Vectors, modules, fields, rings, and semirings are examples of algebraic structures, or abstract specifications of operations on sets and their associated laws. Instead of dealing with numbers and other concrete data, as in high school algebra, the subject of abstract algebra studies algebraic structures. The results of abstract algebra can apply to any concrete operations that have the relevant algebraic structure.

By working with commutative semirings, we allow our matrix operations to work with any type that defines a notion of addition and commutative multiplication. In the context of programming, algebraic structures are like interfaces that specify operations that type parameters must support.

### Natural numbers

```agda
open import Data.Nat using (ℕ; suc)
```

Next, we import the natural numbers. Agda uses the Peano definition of the natural numbers, where a natural number is either:

• 0
• 1 + a natural number ("suc")

Thus, the natural numbers form an inductive data type.
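In Agda syntax, this definition (essentially the one in Data.Nat) looks like:

```agda
data ℕ : Set where
  zero : ℕ
  suc  : ℕ → ℕ
```

Numeric literals like 0 and 3 are sugar for this representation (3 is suc (suc (suc zero))), enabled by a BUILTIN pragma in the standard library.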

### Product types

```agda
open import Data.Product using (_×_; _,_)
```

As an example relating to abstract algebra, the idea of a "product" isn't only for numbers! A multiplication operation can be defined on types as well. The product of two types is their Cartesian product. Sounds complicated? Don't worry, the Cartesian product is merely the type of pairs! For example, (5 , true) has type ℕ × Bool. We will use pairs to achieve multiple return values. Simple enough, right?
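As a small sketch (the names here are mine, assuming Bool from Data.Bool in the standard library), pairs are built with _,_ and taken apart by pattern matching:

```agda
open import Data.Bool using (Bool; true)
open import Data.Nat using (ℕ)
open import Data.Product using (_×_; _,_)

-- Building a pair:
example : ℕ × Bool
example = 5 , true

-- Pattern matching takes a pair apart again:
swap-pair : ℕ × Bool → Bool × ℕ
swap-pair (n , b) = b , n
```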

### Vectors

```agda
open import Data.Vec using (Vec; _∷_; []; foldl; map; zipWith)
```

The final import is the vector. The word "vector" is heavily overloaded in programming, but here, a vector is a homogeneous sequence of values with a fixed length that appears in its type. The vector is indexed on its size.

A vector of A is either:

• empty ([]), in which case it has type Vec A 0.
• an element followed by a vector (head ∷ tail), in which case it has type Vec A (suc n), where head has type A and tail has type Vec A n.

Lispers, MLers, and Haskellers may find this definition familiar, as a vector is like a linked list that carries its size in its type. Notice that the definition of a vector also somewhat mirrors the definition of a natural number. The vector is probably the poster child of dependent types.
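The standard library's definition is, in essence:

```agda
data Vec {a} (A : Set a) : ℕ → Set a where
  []  : Vec A zero
  _∷_ : ∀ {n} → A → Vec A n → Vec A (suc n)
```

Each constructor records its effect on the length in the type: [] has length zero, and _∷_ turns a vector of length n into one of length suc n.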

Think of an Agda vector as analogous to a mathematical vector (an element of a vector space). After all, we are programming with matrices right now!

### Matrices

```agda
Matrix : ∀ {a} (A : Set a) (rows : ℕ) (cols : ℕ) → Set a
Matrix A rows cols = Vec (Vec A cols) rows
```

Here, we define a matrix as a type alias for a vector of vectors. Because types are first-class in Agda, this is just a regular value definition. Notice:

• The element type, A, is a regular parameter to the function.
• The output of the function is a type.
• The dimensions, rows and cols, appear in the returned type.

The a deals with universe polymorphism. Type theory arose as an alternative to naive set theory after Bertrand Russell discovered his famous paradox. As a consequence of Russell's paradox, types must form a hierarchy. Set does not have type Set; rather, Set n has type Set (suc n), where n and suc n are levels. Thus, types at each level live in a different universe. Universe polymorphism allows for code that works with types at all levels.
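For instance, a universe-polymorphic identity function (a standard example, not part of this program) quantifies over the level itself:

```agda
open import Level using (Level)

-- One definition covers every universe:
id : ∀ {a : Level} {A : Set a} → A → A
id x = x

-- id applies to ordinary values (id 5), and also to types themselves
-- (id ℕ), which live one universe higher.
```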

### Helper functions

```agda
fill-empty : ∀ {a} {A : Set a} → (n : ℕ) → Vec (Vec A 0) n
fill-empty 0 = []
fill-empty (suc n) = [] ∷ fill-empty n
```

This function is straightforward. Given a type A and a natural number n, fill-empty n outputs a vector of length n whose elements are empty vectors of A. Notice that the base case is when the number is 0 and the recursive case is when the number is suc n for some n, where the argument to the recursive call is n. This is the standard structural recursion that is frequent when processing inductive data such as the natural numbers or vectors.

```agda
prepend : ∀ {a} {A : Set a} {m n : ℕ} → Vec A n → Vec (Vec A m) n →
          Vec (Vec A (suc m)) n
prepend [] [] = []
prepend (x ∷ xs) (vec ∷ vecs) = (x ∷ vec) ∷ (prepend xs vecs)
```

prepend col matrix takes col, a vector of As, and matrix, a vector of vectors of A, and prepends each element of col to the corresponding element of matrix. Therefore, it essentially prepends a column vector to a matrix. The resulting matrix has one more column than matrix; that is to say, each row is one element longer. Notice that the types reflect this change; compare the parameter type Vec (Vec A m) n to the return type Vec (Vec A (suc m)) n.

```agda
split : ∀ {a} {A : Set a} {m n : ℕ} → Vec (Vec A (suc m)) n →
        (Vec A n) × Vec (Vec A m) n
split [] = ([] , [])
split ((x ∷ vec) ∷ matrix) =
  let xs , vecs = split matrix in
  x ∷ xs , vec ∷ vecs
```

split is the inverse of prepend. Given a matrix, it slices off the left column and outputs a pair consisting of the column and the rest of the matrix. Notice that because of the suc m in the type, the matrix must have a column. The type of an empty row would be Vec A 0, where 0 would fail to unify with suc m. This is one example of dependent types expressing important invariants.

I had trouble writing split until I used prepend as a guide. Because prepend and split are inverses, I realized that to define split, I just needed to invert the definition of prepend.

• prepend pattern matches (x ∷ xs) (vec ∷ vecs). split returns x ∷ xs , vec ∷ vecs
• prepend returns (x ∷ vec) ∷ (prepend xs vecs). split pattern matches on ((x ∷ vec) ∷ matrix), where matrix corresponds with the output of prepend xs vecs. split then applies split matrix and pattern matches it into xs , vecs, corresponding with the arguments of prepend xs vecs.
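In fact, the inverse relationship can itself be stated and proved in Agda. Here is a sketch (the name split-prepend is mine, and this proof is untested against the development above); it assumes propositional equality from the standard library:

```agda
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

-- Splitting after prepending recovers the original column and matrix.
split-prepend : ∀ {a} {A : Set a} {m n : ℕ}
                (col : Vec A n) (mat : Vec (Vec A m) n) →
                split (prepend col mat) ≡ (col , mat)
split-prepend [] [] = refl
split-prepend (x ∷ xs) (vec ∷ vecs)
  rewrite split-prepend xs vecs = refl
```

The proof mirrors the structural recursion of the two functions: the inductive hypothesis rewrites the inner split (prepend xs vecs) to (xs , vecs), after which both sides are definitionally equal.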

The declarative, definitional style of pure functional programming can make code very easy to read.

### Matrix transposition

```agda
transpose : ∀ {a} {A : Set a} {i : ℕ} {j : ℕ} → Matrix A i j → Matrix A j i
transpose {_} {_} {0} {j} [] = fill-empty j
transpose {_} {_} {suc _} {_} (row ∷ rows) = prepend row (transpose rows)
```

transpose transposes a matrix. Notice that dimensions i and j switch places in the input and output types! See how expressive dependent types are?

```agda
module OverSemiring {c ℓ} (Ops : CommutativeSemiring c ℓ) where
```

In Agda, modules are both a way to namespace values and a way to parameterize many definitions over common arguments. Here, the module OverSemiring is parameterized by a commutative semiring called Ops. In the Agda standard library, algebraic structures are represented as records. Because the dot product and matrix multiplication use the operations of the scalar's semiring, we will define them within this module.

```agda
  open CommutativeSemiring Ops
```

Incidentally, records are also modules. Opening Ops brings the scalar type (Carrier), addition (_+_), multiplication (_*_), zero (0#) and one (1#) as defined by the semiring into scope.

### Dot product

```agda
  _∙_ : ∀ {n : ℕ} → Vec Carrier n → Vec Carrier n → Carrier
  _∙_ {_} lhs rhs = foldl (λ {_ → Carrier}) _+_ 0# (zipWith _*_ lhs rhs)
```

This is the definition of the dot product. Notably, the accumulator type is actually a function of the vector length, hence the lambda expression that ignores its argument and returns Carrier.
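To see the dot product concretely, here is a standalone version specialized to ℕ (a sketch separate from the semiring-parameterized code above; the name dot-ℕ is mine, and this one uses foldr, whose accumulator type in Data.Vec is likewise a function of the length):

```agda
open import Data.Nat using (ℕ; _+_; _*_)
open import Data.Vec using (Vec; _∷_; []; foldr; zipWith)

-- Multiply elementwise, then sum.
dot-ℕ : ∀ {n} → Vec ℕ n → Vec ℕ n → ℕ
dot-ℕ xs ys = foldr (λ _ → ℕ) _+_ 0 (zipWith _*_ xs ys)

-- dot-ℕ (1 ∷ 2 ∷ 3 ∷ []) (4 ∷ 5 ∷ 6 ∷ [])
-- normalises as 1 * 4 + 2 * 5 + 3 * 6
```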

### Matrix multiplication

```agda
  multiply : ∀ {i : ℕ} {j : ℕ} {k : ℕ} →
             Matrix Carrier i j → Matrix Carrier j k → Matrix Carrier i k
```

Now, for the meat of the program, here is type-safe matrix multiplication!

```agda
  multiply {0} {_} {_} [] _ = []
```

When i is 0, the resulting matrix has no rows, so we return an empty vector.

```agda
  -- Pattern match on (suc _) to get rid of the "white smoke" warning
  multiply {i@(suc _)} {_} {0} _ _ = fill-empty i
```

When k is 0, the resulting matrix has no columns, so we return a vector of i empty rows.

```agda
  multiply {suc i} {_} {suc k} (lhs-row ∷ lhs-rows) rhs =
```

When neither i nor k are 0, we handle the recursive case. We case-split the left operand into the top row, lhs-row, and the rest of the matrix, lhs-rows.

```agda
    let rhs-col , rhs-cols = split rhs in
```

Next, we split the right operand into its left column, rhs-col, and the rest of the matrix, rhs-cols.

```agda
    let matrix = multiply lhs-rows rhs-cols in
```

We recursively call multiply, computing the result matrix without its top row or its leftmost column.

```agda
    let result-top-left = lhs-row ∙ rhs-col in
```

Then, we find the dot product of the left operand's top row and the right operand's top column. This dot product is the scalar in the top-left corner of the result matrix.

```agda
    let result-left-col = map (_∙_ rhs-col) lhs-rows in
```

Here, we obtain a vector of the dot products between the leftmost column of the right operand and each of the rows of the left operand, minus the top row. This vector will become the left column of the result matrix, except for the top-left corner.

```agda
    let result-top-row = map (_∙_ lhs-row) (transpose rhs-cols) in
```

Similarly, we obtain a vector of the dot products between the top row of the left operand and each of the columns of the right operand, save the leftmost column. This vector will become the top row of the result matrix, except for the top-left corner. Notice the transpose rhs-cols: a matrix is represented as a vector of rows, but here we need a vector of columns.

```agda
    (result-top-left ∷ result-top-row) ∷ (prepend result-left-col matrix)
```

Finally, we put together the resulting matrix.

### Some tests

And, finally, here are some tests:

```agda
module _ where
  open import Data.Nat using (_+_; _*_)
  open import Data.Nat.Properties using (*-+-isCommutativeSemiring)
  open import Level using (0ℓ)
  open import Relation.Binary.PropositionalEquality using (_≡_; refl)
```

Agda has a "propositional equality" type called "_≡_" that states that the two sides of the equal sign are equal. refl has type a ≡ a, meaning that it is a proof of equality when the two sides of the equal sign have the same normal form. Therefore, equality checking is done as part of typechecking, that is, during compile time!
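For example, a trivial compile-time arithmetic check (a sketch; the name two-plus-two is mine):

```agda
open import Data.Nat using (ℕ; _+_)
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

-- Both sides normalise to 4, so refl typechecks.
two-plus-two : 2 + 2 ≡ 4
two-plus-two = refl
```

Had we written 2 + 2 ≡ 5, refl would be rejected with a type error.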

```agda
  ops : CommutativeSemiring 0ℓ 0ℓ
  ops = record
    { _+_ = _+_
    ; _*_ = _*_
    ; 0# = 0
    ; 1# = 1
    ; _≈_ = _≡_
    ; isCommutativeSemiring = *-+-isCommutativeSemiring }

  open OverSemiring ops
```

This is the commutative semiring structure that the natural numbers have. The isCommutativeSemiring member contains proofs that the proper laws hold.

```agda
  identity-2 : Matrix ℕ 2 2
  identity-2 =
    (1 ∷ 0 ∷ []) ∷
    (0 ∷ 1 ∷ []) ∷ []

  some-mat : Matrix ℕ 2 2
  some-mat =
    (1 ∷ 2 ∷ []) ∷
    (3 ∷ 4 ∷ []) ∷ []

  some-mat-trans : Matrix ℕ 2 2
  some-mat-trans =
    (1 ∷ 3 ∷ []) ∷
    (2 ∷ 4 ∷ []) ∷ []

  test : multiply some-mat identity-2 ≡ some-mat
  test = refl

  test2 : transpose some-mat ≡ some-mat-trans
  test2 = refl

  left-mat : Matrix ℕ 2 3
  left-mat =
    (1 ∷ 2 ∷ 3 ∷ []) ∷
    (4 ∷ 5 ∷ 6 ∷ []) ∷ []

  right-mat : Matrix ℕ 3 2
  right-mat =
    (7 ∷ 8 ∷ []) ∷
    (9 ∷ 10 ∷ []) ∷
    (11 ∷ 12 ∷ []) ∷ []

  product : Matrix ℕ 2 2
  product =
    (58 ∷ 64 ∷ []) ∷
    (139 ∷ 154 ∷ []) ∷ []

  test3 : multiply left-mat right-mat ≡ product
  test3 = refl
```

## The verdict

I had a wonderful experience using Agda to write this code. Dependent types flowed naturally with my code's structure and didn't inconvenience me. As long as I wrote my code correctly, the types didn't get in the way. Of course, whenever I wrote something incorrect, the typechecker complained, as it was supposed to.

Agda is very much an interactive language. It is intended to be written using an Emacs mode that essentially serves as an IDE. One important feature is typed holes. When unsure how to write something, the programmer can place a question mark, and it will turn into a "hole" that lists the expected type and the names currently in scope. Thus, programming in Agda can turn into a game of "fill in the blank with these hints." As I wrote this program, I occasionally used typed holes when I got stuck. The Emacs mode also has shortcuts for typing common special characters.

One negative of Agda is its poor standard library documentation. Currently, it's literally the source code with clickable names. In addition, Agda supports custom mixfix operators with Unicode characters; although these operators let one replicate mathematical notation, they are also hard to search for. Therefore, discoverability is poor.

I'm not sure how performant Agda is, but as a pure functional language, it isn't appropriate where mutation is a must-have. In addition, as a theorem-proving language, many definitions in Agda are designed for ease of mathematical manipulation, not performance. Note that there is an alternate standard library intended for programming. Truthfully, though, Agda is more for proving than for programming.

There is a similar programming language called Idris that is aimed more at general-purpose programming.

I'd say that type theory heavily intersects with pure math. Can dependent types achieve mainstream acceptance, or are they too hard to understand? For what it's worth, even though I like type theory and the surrounding math, I don't have formal mathematical training. Yet, I managed to write this program. I'd wager that it's possible to teach dependent types in a practical way.

Agda is a beautiful programming language, and dependent types are an insanely powerful tool for ensuring program correctness. I hope to use Agda more and become fluent in it, and I think that its ideas deserve more mainstream awareness.