Let me try to make a bit of high-level sense of Prop in Epigram 2. It’s quite different from Prop in Coq: it does a different job, and it sits in quite a delicate niche.

Firstly, Prop is *explicitly* a subuniverse of Set. If P is a Prop, then :- P is the set of its proofs, but P itself is not a type. (Are propositions types? How, functionally, should the Curry-Howard “isomorphism” be manifest?) Note that :- is a *canonical* set constructor; it is preserved by computation. Hence you know a proof set when you see one.

Secondly, we squash proof sets. All inhabitants of proof sets are considered equal, *definitionally*. We have that :- P ∋ p ≡ p’, and we can easily see when this rule applies, just because :- is canonical. Contrast this with Coq, where Prop is *silently* embedded in Type: that makes it harder to spot opportunities to squash proofs.
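For comparison (my example, not the post’s), Lean 4 also squashes proofs definitionally, although its Prop is embedded silently, Coq-style, rather than flagged by a canonical constructor like :- . A minimal sketch:

```lean
-- Definitional proof irrelevance: any two proofs of the same
-- proposition are identified by the definitional equality, so
-- rfl is enough to close this goal.
example (P : Prop) (p q : P) : p = q := rfl
```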

And that’s kind of it. Prop is a subuniverse of Set that’s safe to squash, even when computing with *open* terms. There’s nothing we do with Prop that we couldn’t do more clunkily in Set. It is, in effect, an optimization. However, the pragmatic impact of this optimization is considerable. More on what we’ve bought in a bit; let’s check that we can afford it.

### Prop is Lazy

What’s the price of squashing? *Laziness*. Computation must not discriminate between canonical and non-canonical proofs of the same proposition, because they’re supposed to be *equal*. Strict elimination of props over sets chucks away transitivity of definitional equality, and thence type preservation. We must learn to be lazier, and that’s actually quite hard work.

The trouble is that strictness is how we protect computation from *liars*: remember that when we work with open terms, all propositions are provable by appeal to false hypotheses. That’s why you can’t get proof irrelevance on open terms from your run-time proof *erasure* mechanism: to do so corresponds to ‘swallowing’ whatever lies you’re told, and if that’s Nat = Nat → Nat, say, you could be in trouble. To discard strictness, we need to find more subtle *trust* mechanisms.
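To see the danger concretely, here is a sketch of that very lie in Lean 4 notation (my rendering, not the post’s): under a false hypothesis the transport typechecks, and it is laziness — the cast staying stuck rather than computing — that stops the lie being swallowed.

```lean
-- Under the false hypothesis h, we may "transport" the number 0
-- to a function type. The cast remains stuck on the non-canonical
-- proof h; if computation believed it, type preservation would die.
example (h : Nat = (Nat → Nat)) : Nat → Nat := cast h 0
```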

Which propositions eliminate over sets, anyway?

*Absurdity*, for a start. Here, the trust mechanism is the simple one we use for politicians. Absurdity elimination is strict in that it demands a canonical proof. It’s also lazy in that it computes as much as possible (i.e., not at all) without being given a canonical proof. No worries there.
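In Lean 4 terms (again my comparison), absurdity elimination looks like this; no canonical proof of False exists, so in a consistent context the eliminator just waits, lazily, forever.

```lean
-- From a proof of False we may inhabit anything. This never
-- computes: there is no canonical proof to be strict in.
example (h : False) : Nat := h.elim
```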

*Conjunction* and *universal quantification* don’t need to eliminate over sets in general: projection and application, respectively, are all you need. By identifying all proofs, we get rid of the need for *dependent* eliminators which show you the internal structure of proofs: you don’t need to know.
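A hedged Lean 4 illustration of the point: projection and application are all the elimination these connectives need, with no dependent eliminator exposing proof structure.

```lean
-- Conjunction needs only projections, universals only application.
example (A B : Prop) (h : A ∧ B) : A := h.left
example (P : Nat → Prop) (h : ∀ x, P x) : P 3 := h 3
```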

*Equality* eliminates over sets: that’s how you turn your vector of length n+m into a vector of length m+n. But you don’t want to turn your sheep into a pen, even if you are outstanding in your field. In Epigram 1, coercion between equal types was strict in the equality proof, and that was a *bug*. As it were,

```
coe : (S T : Set) → S == T → S → T
coe _ _ refl s = s
```

but for some hypothetical Q : S == S, we did not have coe S S Q s = s, even though by proof irrelevance, Q ≡ refl. Yuk. Sorry.
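In Lean 4 notation (my sketch, with Fin standing in for the vector — the shape of the problem is the same), the transport in question looks like this; when the proof is by refl the coercion should vanish, and proof irrelevance demands that every other proof behave likewise.

```lean
-- h ▸ i transports i : Fin (n + m) along h : n + m = m + n.
-- Proof irrelevance says the choice of h must not matter.
example (n m : Nat) (h : n + m = m + n) (i : Fin (n + m)) : Fin (m + n) :=
  h ▸ i
```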

In early Epigram 2 attempts, we got around this by asking ‘would refl prove this equation?’, not ‘is *this* proof refl?’.

```
coe S T Q s = s   if   Set ∋ S ≡ T
```

(Of course, we did check for refl first, as an optimization, but the backup plan is to run an equality test.) This meant that evaluation and equality were mutually defined, and that evaluation had to thread a name supply to allow η-expansion in those equality tests. I think this approach may have been part of the (dead?) plan for proof irrelevance in Coq, where η-expansion isn’t available anyway. For us, it worked but it sucked.

But it turned out (see ancient posts) that the machinery we built for extensional equality gave us a slightly lighter trust mechanism, just enough to combine laziness with paranoia. At each stage, coe checks just the head constructors of the allegedly equal types and if they match, it computes recursively under just the head constructor of the value being transported. That is, coe trusts incrementally and computes incrementally.

Note: coe’s inspection of set structure does not prevent *run time* type erasure, because we also erase coe. Note also that set equality is structural, rather than up-to-iso as in homotopy calculi. Although I’m generally of the opinion that counting a set’s elements is not the same as understanding it, I’d like to see more support for working up-to-iso. But it’s clear that witnesses to isomorphism, e.g. not : Bool → Bool, must persist at run time, so it will still be pragmatically valuable to notice which of those isomorphisms happen to do bugger all. But I digress.

*Accessibility* and similar notions used in termination proofs eliminate over sets, and that’s rather the point: they justify that programs are ok. We need a trust mechanism: would you run a program with a *placeholder* for its termination proof? Lieutenant Wellordering means well, but can still accomplish by laziness and gullibility what General Recursion just does by sheer incompetence. I don’t know of a trust mechanism for accessibility other than strictness in the proof. I fear that if you don’t look at the proof, you’re left with the Halting Problem.

At the moment, we don’t know how to allow accessibility and friends in Prop, so we keep them in Set. Note: we can still erase termination evidence at run time, just not when computing with open terms—its only value is in establishing trust, and we get that from consistency when we compute with closed terms. The desire (presumably for compatibility’s sake) to keep this stuff in Prop is a dealbreaker for proof irrelevance in Coq. But perhaps they could cut out a lazy, proof irrelevant sub-Prop…
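For what it’s worth, Lean 4 takes the road declined here (my observation, not the post’s): its accessibility predicate lives in Prop, and the kernel grants it large elimination on subsingleton grounds, so termination evidence is a proof that nonetheless drives computation.

```lean
-- Lean keeps accessibility in Prop yet lets it eliminate into Sort:
#check @Acc      -- {α : Sort u} → (α → α → Prop) → α → Prop
#check @Acc.rec  -- eliminates into any Sort, despite Acc : Prop
```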

### The Goods

What have we bought with this careful renegotiation of trust? Firstly, laziness gives us *canonicity* (a term I learned from Martin Hofmann). Every normal form inhabitant of a datatype has either a head constructor or a free variable. Moreover, this canonicity is robust to extension with consistent propositional axioms. Of course, lots of your favourite axioms (proof irrelevance, functional extensionality, …) just hold anyway, but you can add others (parametricity?) without spannering up the engines.

Next, we’ve bought the right to *automate proof search*. Maybe you don’t care and do it anyway, but if you solve A ⇒ A ⇒ A by automation, you can get two different proofs even in the empty context, and lots more in open terms. In Coq, your machines can make the ‘wrong’ choice. Squashing the proof sets means it cannot matter which proof is chosen. The very least we can do for you is knock off a propositional subgoal if there’s a proof of that very proposition in scope. I don’t know if you remember this old dream, but it’s about to come true, only ten years later.
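The ‘knock off a subgoal from the context’ move is exactly what a context-search tactic does; in Lean 4 terms (my sketch), squashing is what makes its choice between candidates invisible.

```lean
-- Whichever of h1, h2 the search picks, proof squashing makes
-- the choice unobservable: all proofs of A are equal.
example (A : Prop) (h1 : A) (h2 : A) : A := by assumption
```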

Now, a consequence of *extensionality* is that we can use propositional equality to manage type constraints. In our bidirectional setting, whenever we infer s ∈ S, but need to check T ∋ s, we are free to speculate a proof Q that S equals T and fill in coe S T Q s as the intended inhabitant of T. If S and T are definitionally equal, Q can be solved by refl (and the coercion will vanish); if the context has a proof of the equation, Q can be solved by hypothesis; if both, who cares?

Extensionality allows a compositional approach to solving these equations, even if this involves going under a binder. To check λx→s == λx→t, check s == t for hypothetical x. How do you deduce the former from the latter, if not by extensionality? If you do ‘show constraints’ in Agda, you’ll see that type constraints are not inhabitation problems and fester in their own separate trench, out of range of our machine guns. They’re in trouble now…
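The under-a-binder step the post describes can be sketched in Lean 4 (where it goes by funext, derived there from quotients rather than holding definitionally as in an extensional setting):

```lean
-- To prove s = t for functions, prove s x = t x for hypothetical x.
example (s t : Nat → Nat) (h : ∀ x, s x = t x) : s = t := funext h
```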

So, the game is to pick off propositional obligations, e.g. type constraints, by proof search. If that game is to be complete, we had better ensure that you can stack the odds in your favour, by loading the local context with the proofs that you might need. How about bringing them into scope with a good old **where**-clause? That would pull the ‘trust’ story out of the works of the ‘computation’ story, but not so far that you can’t find it.

### Proof Style

We should also ask what a proof looks like. And if you say ‘a program’, I shall respectfully disagree. When you construct an inhabitant of Nat by applying a function in Nat→Nat→Nat, it really matters which function you choose, hence it is good if the expression denoting the inhabitant is explicit about that. When you deduce C from A→B→C, what’s important is the *statement* of the lemma you used, and the fact that you must now show A and B. We should write *propositions*, not the inhabitants of their proof sets. I’m arguing for *declarative* proof, in the spirit of Mizar.

```
C because A→B→C where
  A because …
  B because …
```

Up to punctuation, I’m particularly arguing that the difference between proofs and programs should be that in the former, we replace elements of proof sets by the statements of the propositions themselves. So not

```
plus0 : (x : Nat) → x + 0 == x
plus0 x ⇐ induction x
plus0 0 = refl
plus0 (S x) = cong S (plus0 x)
```

but something like

```
(x : Nat) ⇒ x + 0 == x
x + 0 == x ⇐ induction x
0 == 0 trivial
S (x + 0) == S x because x + 0 == x
```

There’s a lot of design still to do. I don’t have a semantics for ‘because’ yet, or a sense of which things will be obvious enough to be omitted. But I do know that if you replace every variable whose type is :- P by an assertion of P to be elaborated by search, searching the context is enough to fill in the blanks. It amounts to writing the types down and doing program inference, or writing the proof *tree* and recovering the proof *term* by elaboration.
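For comparison, here is a Lean 4 rendering (mine, with a hypothetical name) of the proof term such a declarative tree would elaborate to. Note Lean’s + recurses on its right argument, so there x + 0 = x holds by refl; it is 0 + x = x that needs the induction, mirroring the shape above.

```lean
theorem zeroPlus (x : Nat) : 0 + x = x := by
  induction x with
  | zero => rfl                              -- 0 == 0, trivial
  | succ x ih => exact congrArg Nat.succ ih  -- S case, because 0 + x = x
```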

### Wanton Speculation

Where does all this leave us? In search of a declarative Epigram Proof Language, which shares the structure of the programming language, but has propositions where left-hand sides and expressions used to go. It’s my suspicion that no language or system will definitively supplant either Haskell or Coq unless it clearly supplants both. We should think hard about how to do that, and that means figuring out how programming and proving are different, as well as how they are similar. This Prop, with its ‘lazy trust’ character supporting proof irrelevance and extensionality, is at least one way to get these issues on the table.