# What I Learned This Week: Coinduction, Dependent Pattern Matching, and Langsec

# Introduction

I’m starting a small series. The gist of it? What I learned in the past week.

Each post will be a brief summary of some things I studied in the past week: not necessarily an in-depth exploration of the topics, but you may find useful insight, or at least useful references if a topic interests you.

If you find a mistake in my understanding, or have a question, feel free to contact me!

With that out of the way, let’s talk about some of the things I read about this week: coinduction in Coq, dependent pattern matching in Coq, and some papers in langsec.

# Coinduction

I had messed with coinductive data types in Idris before, while going through Type-Driven Development with Idris. I hadn’t used them in Coq, however, so I ended up reading the coinduction chapter of CPDT, and while going through it a couple of things clicked for me.

On my first pass through Type-Driven Development with Idris it was not immediately obvious to me why coinductive definitions have to be productive. It makes a lot of sense once you start thinking about them in terms of, say, generators in Python: a stream of values is essentially just a structure for producing the “next” value in a sequence. If it’s not known whether you will get a next value (e.g., if you `filter` a stream), then asking for the next value in the sequence might not terminate.

Coq uses the guardedness condition, which is a syntactic check to guarantee that coinductive definitions are actually productive. This is very similar to how Coq normally uses simple structural recursion to check for termination of definitions, with the same goal of preventing inconsistencies in the logic.
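To make the guardedness condition concrete, here’s a minimal sketch of my own (not taken from CPDT) of a stream type in Coq. The productive definition passes the check because every corecursive call sits directly under a constructor, while a stream `filter` does not:

```
CoInductive Stream (A : Type) : Type :=
| SCons : A -> Stream A -> Stream A.

Arguments SCons {A}.

(* Productive: each unfolding immediately produces an SCons before
   the corecursive call, so the guardedness check accepts it. *)
CoFixpoint nats_from (n : nat) : Stream nat :=
  SCons n (nats_from (S n)).

(* Consumers stay total by recursing on a structurally decreasing
   "fuel" argument instead of on the stream. *)
Fixpoint take {A} (n : nat) (s : Stream A) : list A :=
  match n, s with
  | 0, _ => nil
  | S n', SCons x s' => x :: take n' s'
  end.

(* A stream filter is rejected by the guardedness check, because the
   "else" branch makes a corecursive call with no constructor around it:

   CoFixpoint filter {A} (p : A -> bool) (s : Stream A) : Stream A :=
     match s with
     | SCons x s' => if p x then SCons x (filter p s')
                     else filter p s'
     end.
*)
```

And rightly so: if `p` never holds again past some point in the stream, forcing the next element of `filter p s` would loop forever.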

# Dependent Types and Dependent Pattern Matching in Coq

I have been working on a project using dependent typing in Coq. I have previously worked with dependent types in Agda and Idris, so I thought that adding dependent types to everything in Coq would be easy peasy! I was wrong.

As it turns out, Coq does not include the so-called “Axiom K”, which makes working with dependent types much easier. I don’t actually know too much about what “Axiom K” is or what it does yet, but my understanding is roughly that it allows for some coercion between types. I also assume that while you can add “Axiom K” to Coq, it’s probably still not as convenient as dependent pattern matching in something like Agda, which has this assumption built in and can take it into account to give nicer syntax and whatnot. I could be wrong about these assumptions (let me know!), but this was my understanding from one of the student talks on this topic at the DeepSpec Summer School in 2017.
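For reference, Axiom K (due to Streicher) can be stated in Coq directly; this is just the statement as I understand it, not something from the talk. It says that to prove a property of any proof of `x = x`, it suffices to prove it for `eq_refl`:

```
(* Streicher's Axiom K: not provable in Coq's core theory, but
   assumed by Agda's default dependent pattern matching. *)
Axiom K : forall (A : Type) (x : A) (P : x = x -> Prop),
    P eq_refl -> forall p : x = x, P p.
```

K is interderivable with uniqueness of identity proofs (UIP, `forall (A : Type) (x y : A) (p q : x = y), p = q`), which is why it smooths out dependent pattern matching on indexed types: the typechecker gets to identify any two proofs of the same equality.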

Anyway! So, what’s the problem I ran into? Well, in my project I figured I would work a lot with vectors: list types which are indexed by their length. I’m using the built-in Coq vector library, but a vector basically looks like this:

```
Inductive Vec A : nat -> Type :=
| nil : Vec A 0
| cons : forall (h:A) (n:nat), Vec A n -> Vec A (S n).
```

Since the `Vec` type keeps track of the length of the vector, it can very conveniently prevent out-of-bounds array access, and simplify some functions which would normally be partial, like `hd`.

Somewhat foolishly, I assumed that this would make a lot of the proofs in my project easier. After all, a vector type carries more information than a list type, which means more information you have access to in a proof.

However, while this is true, a vector also has more restrictions than a list type, and sometimes these restrictions can get in the way. I was somewhat surprised to find that this even had an effect on the theorems that I was allowed to write down.

For instance, take this theorem:

```
Theorem vec_n_0_nil :
  forall {A n} (v : Vec A n),
    n = 0 -> v = nil A.
```

Coq complains, rightfully so, that `nil A` is going to have type `Vec A 0`, but `v` has type `Vec A n`.

```
Error:
In environment
A : Type
n : nat
v : Vec A n
The term "nil A" has type "Vec A 0" while it is expected to have type "Vec A n".
```

Coq won’t take into account our hypothesis that `n = 0` when trying to unify the types, and I currently have no clue how to make it do so. Often these theorems can be rewritten to something reasonable, however:

```
Theorem vec_0_nil :
  forall {A} (v : Vec A 0),
    v = nil A.
```

Which is probably the way to go, since you should be able to rewrite with a hypothesis in most places.
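For what it’s worth, the reformulated statement can be proved without any axioms using a dependent `match` with an explicit `return` annotation. This is just a sketch of the standard trick (CPDT covers it in depth), using the `Vec` definition above: the return type is computed from the length index, so the impossible `cons` branch only has to produce `True`.

```
Definition vec_0_nil' {A} (v : Vec A 0) : v = nil A :=
  match v as w in Vec _ n
        return (match n as n0 return (Vec A n0 -> Prop) with
                | 0 => fun w0 => w0 = nil A
                | S _ => fun _ => True
                end w)
  with
  | nil => eq_refl        (* goal computes to [nil A = nil A] *)
  | cons _ _ _ => I       (* impossible index; goal computes to [True] *)
  end.
```

At `Vec A 0` the annotation specializes to `fun w0 => w0 = nil A`, which is exactly the theorem statement.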

## `sig`

I encountered the `sig` type unexpectedly while working on my project. It was simple enough to work with, but I wasn’t sure what it meant exactly…

It means sigma. It’s a sigma type, so it handles existential quantification. There’s some really good documentation about how to use `sig` for subset types in this CPDT chapter. In particular, I like the use of Coq’s notation features to almost completely eliminate the overhead of using subset types in Coq.

## Pattern Matching

There are a couple of tricks to pattern matching with dependent types in Coq, since it doesn’t have as much magic as Agda. One thing that wasn’t immediately obvious to me was how to deal with “absurd” branches, where a branch in a match would lead to a contradiction, so it should be ignored.

In Agda you might use the absurd pattern to deal with branches of a program which should never be executed due to a condition which will never hold. I.e., if the condition to enter the branch is true, then that condition implies `False`.

```
hd {X} (xs : list X) (pf : length xs > 0) : X
```

For instance, in the case of a `hd` function, which takes the first element of a list, you might have an argument that’s a proof that the length of the list is greater than 0. But you still have to provide a case in the match for when the list is empty, otherwise you’ll have an incomplete pattern match.

Well, that’s okay, because in Coq we can use an empty pattern match on `False`. So you might think you can do something like this:

```
Require Import List.

Lemma length_nil_not_gt_0 :
  forall {X}, @length X nil > 0 -> False.
Proof.
  intros X H. inversion H.
Qed.

Definition hd {X} (xs : list X) (pf : length xs > 0) : X :=
  match xs with
  | nil => match length_nil_not_gt_0 pf with end
  | h :: t => h
  end.
```

But this still doesn’t quite work.

```
Error:
In environment
X : Type
xs : list X
pf : length xs > 0
The term "pf" has type "length xs > 0" while it is expected to have type
"length nil > 0".
```

Coq does not seem to recognize that in this branch `xs = nil`, and so it’s not replacing `xs` with `nil` when it typechecks the branch.

So, how can we get around this? What seems to be happening is that `pf` gets tied to the type `length xs > 0` at the very top level, where `xs` is still completely general.

The way to get around this is to make `hd` return a function which takes the proof as an argument (CPDT calls this the “convoy pattern”). This lets `pf` have a different, more specific type in each branch.

```
Require Import List.

Lemma length_nil_not_gt_0 :
  forall {X}, @length X nil > 0 -> False.
Proof.
  intros X H. inversion H.
Qed.

Definition hd {X} (xs : list X) : (length xs > 0) -> X :=
  match xs with
  | nil => fun pf => match length_nil_not_gt_0 pf with end
  | h :: t => fun _ => h
  end.
```

I’m still getting my head fully around `match` annotations, but I have found a few resources useful on my quest to enlightenment.
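One thing that helped me was writing out the `return` annotation that Coq infers for the definition above explicitly. This is the same `hd`, reusing the `length_nil_not_gt_0` lemma from the earlier snippet: `as l` names the scrutinee, and the branch type `length l > 0 -> X` gets specialized in each arm.

```
Definition hd' {X} (xs : list X) : length xs > 0 -> X :=
  match xs as l return (length l > 0 -> X) with
  (* in this branch the proof argument has type [length nil > 0] *)
  | nil => fun pf => match length_nil_not_gt_0 pf with end
  (* here it has type [length (h :: t) > 0], which we can ignore *)
  | h :: t => fun _ => h
  end.
```

Seeing the annotation spelled out makes it clearer why returning a function works: the proof only gets bound after the match has refined the type.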

# Timing-Sensitive Garbage Collection

I read this paper on timing-sensitive garbage collection. The problem the paper aims to solve is the fact that you can leak information through timing differences in garbage collection.

Simply put, if you branch on a secret bit, and one branch allocates a bunch of memory on the heap while the other doesn’t, then a lower-security-level section of the program can cause the garbage collector to trigger (or not) depending on the branch taken, and timing how long this takes can reveal a bit of information from the higher-security-level section of the program.

To prevent this, the paper uses a separate heap for each security level, and garbage collection of a heap is only allowed when the program is currently executing at that security level. This stops a low section of the program from triggering collections based on what’s in a high heap.

Ultimately the result is that lower security levels can’t observe timing differences caused by anything running at a higher security level.

One aspect of this which I do find problematic is the `at` command, which bounds the execution time of a block. If the block takes less time than the bound, execution is padded out; i.e., it always takes the same amount of time to get through the `at` command. While this is inefficient, I’m not sure there’s much you can do about it while maintaining the same security guarantees.

However, what I find slightly more problematic is what happens when the execution time exceeds the bound: the program just crashes, since execution can’t continue while keeping the security guarantees. After all, if you could tell that the `at` command did not finish in time, you could use that to leak information to the lower-security section of the program. The paper does briefly mention that it doesn’t handle this, or similar problems like heap exhaustion, and instead suggests that one should be able to use tangential techniques for handling “termination or progress-sensitive security”.

That said, a nice aspect of the `at` command is that the bound can be any expression; it does not have to be a constant value. This means you can calculate an upper bound dynamically, which gives you a lot more leeway and lets you make your bounds as tight as possible for efficiency. However, I suspect that this flexibility is itself fairly complicated to reason about, and making the bound an arbitrary expression could mean that problems with the bounds are subtle and difficult to spot while testing. Then again, the thing you’re timing is Turing complete anyway, so maybe it doesn’t matter that the bounds can change according to some arbitrary calculation.

I’m very curious how this plays out in a real world implementation. Compiler optimizations could make it more difficult to judge how long things will take, and it’s possible that the bounds could be fairly complicated as timings could be very different depending on whether or not your array fits in cache, memory, or swap. These bounds may have to be different for different computers, and may be subject to interference from different processes.

I do like this, though! It’s something that comes up with a lot of security features that operating systems can provide anyway – if the system sees something that means it can no longer guarantee security, then maybe it should just crash to ensure the user’s safety! I suspect that crashing is sufficient to leak a bit of information each time the program executes, though, as long as it’s able to talk to the outside world, but this is still a marked improvement over the exploit pointed out in the paper which would let the program leak a bit of high security information per iteration of a loop.

# Productivity

One thing I learned this week is that pdf-tools for Emacs is invaluable for getting through papers. Having everything I need within Emacs is much less distracting, and being able to easily write notes in org-mode as I go is a huge win.

Trying to go through a paper in Zathura with Emacs on the side, or god forbid a PDF viewer in a browser, was just distracting enough to ruin the flow of reading the paper.