Peter Morris is visiting Glasgow, so we’re trying to get on with writing an Epigram Elaborator. Let’s walk through the layers of the Epigram system.
- **Evidence** There’s an underlying type theory, the Language Of Verifiable Evidence. We have a bidirectional typechecker, which means the language can be quite compact, even though it is fully explicit. Necessarily, we also have a partial evaluator for it. LOVE will be total when we roll out the plan in Crude but Effective Stratification. There’s no need for a termination oracle: typechecking is the whole of the law.
- **ProofState** There’s a representation of incomplete constructions in LOVE: a hierarchy of long-named definienda, some of which have lovely definientia (and are coloured green), but others of which remain ‘holes’ (coloured orange). Crucially, there is a mechanism for updating the ProofState with more informative types and values downscope of wherever an act of definition takes place.
- **Cochon** There’s an API for ProofState update which is a bit like an interactive theorem prover, with commands for making progress by refinement. We call it Cochon, as an homage. We even have a teletype-interaction wrapper for that API, which is what you currently have to put up with when you try out our prototype. We intend to preserve this system as a kind of ‘inspection hatch’ to the ProofState.
- **The Elaborator** is a robotic expert user of Cochon. Given a (possibly incomplete) lump of Epigram source code, the Elaborator will operate the controls of Cochon to perform the corresponding construction.
- **The Transducer** translates a textfile into a structured task for the Elaborator. When the Elaborator has finished, the Transducer interprets the resulting ProofState as a patch to the original textfile, indicating the progress that has been made.
What makes it possible for the Transducer to generate readable patches from the bowels of the ProofState? How does the Elaborator know what to do? We follow James McKinna’s discipline of problems-as-types. LOVE gives types to the underlying semantic objects we care about, but it can also wrap those types in phantoms which explain the problem intended to be solved by constructing a whatever-it-is. We have types not only for things, but for a-particular-thing-denoted-by-this-lump-of-source. We enrich types to capture the problems that the Elaborator must solve.
I’ve also learned a lesson I’ve been teaching for years. We separate the source language namespace from the LOVE namespace. The Elaborator should not need to rely on choosing the correct α-variant of a problem to make sense of expressions involving names selected by humans. Again, we use phantom types to record, in the premises of a problem type, which human names are in scope and by what interface they refer to some LOVE construction. These phantom types, which represent human acts of naming, we call dub types.
Dub types, which I’ll specify precisely in a minute, record the human interface to things: that’s not only naming, but also stuff like which arguments are implicit. They do this by assigning to each named object a type scheme. Type schemes are first class objects, inhabiting a type called Scheme which is defined by induction-recursion and (as of today) hardwired into the system, until we get around to implementing I-R more generally. When we stratify, Scheme needs to be large.
Scheme : Set
El : Scheme → Set
ty (S : Set) : Scheme
El (ty S) = S
pi (S : Scheme)(T : El S → Scheme) : Scheme
El (pi S T) = (s : El S) → El (T s)
impi (S : Set)(T : S → Scheme) : Scheme
El (impi S T) = (s : S) → El (T s)
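For a rough flavour, here is one way to fake Scheme and El in Haskell (a sketch only: Haskell lacks induction-recursion, so El becomes a GADT index, and the dependency of T on s is flattened to a fixed underlying type; all the names here are illustrative):

```haskell
{-# LANGUAGE GADTs #-}

-- A non-dependent approximation: the GADT index plays the role of El.
data Scheme el where
  Ty   ::                                Scheme t        -- El (ty T)     = T
  Pi   :: Scheme s -> (s -> Scheme t) -> Scheme (s -> t) -- El (pi S T)   = (s : El S) → El (T s)
  Impi ::             (s -> Scheme t) -> Scheme (s -> t) -- El (impi S T) = (s : S)    → El (T s)

-- A const-like scheme whose first argument is implicit. Note that the
-- index, Int -> Bool -> Int, is the same whether we use Pi or Impi:
-- the underlying theory sees no difference.
constScheme :: Scheme (Int -> Bool -> Int)
constScheme = Impi (\_ -> Pi Ty (\_ -> Ty))
```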
A Scheme built by impi records the human preference that a particular argument to a function be inferred by default, but the underlying theory has no notion of function with implicit argument.
Contrast this with Agda, where {s : S} → T is a Set in its own right, distinct from (s : S) → T. I know that the Agda choice is a bad one, because I made that choice in Epigram 1. One kind of trouble arises when inferring some ?X : Set from the type of an expression that ought to inhabit ?X: you might think it’s open and shut, taking ?X to be the type inferred for the expression, but no, it might be that ?X has to be {s : S} → T for some T which unifies with the type of the expression. Given this setup, you have to make the flaky choice of committing to the ‘obvious’ solution, even if it’s unforced, or just get stuck. It’s part of what makes implicit syntax such voodoo. I may have been guilty of it, but at least I’m fessing up and trying to fix it.
The point, here, is that we distinguish inferring sets in the course of following a Scheme from inferring schemes. We don’t expect the latter to happen in the normal course of events. We shouldn’t make the machine try to guess where implicits might be needed; we should only make it guess the values of those implicits when it’s clear they’re needed.
Any road up, once we know what a Scheme is, we can say what a dub type is.
(‘x’ : S = s) : Set
if x is an identifier in the human namespace, S : Scheme and s : El S
Dub types are trivially inhabited by [], the ‘nil’ s-expression.
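Here’s a Haskell impression of the phantom character of dub types (again a sketch: Haskell types cannot mention the dubbed value s, so only the name and the underlying type survive as phantom parameters):

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}
import GHC.TypeLits (Symbol)

-- A dub type is a unit type with phantom parameters: the trivial
-- inhabitant DubNil plays the role of the [] s-expression.
data Dub (x :: Symbol) el = DubNil

-- A hypothesis recording that the user's name x refers to an Int:
xDub :: Dub "x" Int
xDub = DubNil
```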
The basic problem type is to construct an inhabitant of an underlying type from an expression in the checkable fragment of the source language.
‘t’ <| T : Set if t is a source language term and T : Set
It’s inhabited by [t] if t : T. That’s a singleton list s-expression. (Er, um, I write postfix projections as integers, where a non-negative n means cadnr, projecting out the nth element, and -n just means cdnr, dropping the first n elements. So a postfix 0 projection will unpack the solution [t] to that problem type as the underlying construction t.)
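For concreteness, here is that convention with s-expressions modelled as Haskell lists (a sketch; real solutions are s-expressions, not lists over a single element type):

```haskell
-- Postfix projections on s-expression solutions, modelled as lists.
cadnr :: Int -> [a] -> a
cadnr n xs = xs !! n     -- postfix n:  car (cdr^n xs), the nth element

cdnr :: Int -> [a] -> [a]
cdnr n xs = drop n xs    -- postfix -n: cdr^n xs, everything past the first n

-- So the solution [t] of 't' <| T unpacks as: cadnr 0 [t] == t.
```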
Here’s an example. If the Elaborator is faced with the problem of elaborating a source-language lambda
‘\x -> t’ <| (s : S) → T s
it speculates a subproblem (I’ll use do-block notation for that, because it’s the make-me-one-of-these monad) and constructs a solution
f ← (s : S) → (‘x’ : ty S = s) → ‘t’ <| T s
return [λ s → f s [] 0]
The subproblem type explicitly records the connection between the user’s name choice and the underlying argument. The win is that the elaboration is expressed directly as a refinement step in the underlying proof system, with no bookkeeping outside of the types themselves.
The way to solve a problem of type (s : S) → … is to introduce the hypothesis and carry on, so we end up with a context that records both the stuff that is available and, via dub-typed hypotheses, the mapping of source-level names to some of that stuff.
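Here is a toy Haskell rendering of that discipline (a sketch under loud assumptions: Src, Ty and Term are hypothetical mini-syntaxes, speculation collapses into plain recursion, and the dub-typed hypotheses become an environment from source names to core names):

```haskell
-- Checkable source terms and elaborated core terms (toy syntaxes).
data Src  = SLam String Src | SVar String
data Ty   = TyFun Ty Ty | TyAtom String
data Term = Lam String Term | Var String
  deriving Show

-- The environment of dub hypotheses: user names to core names.
type Dubs = [(String, String)]

check :: Dubs -> Src -> Ty -> Maybe Term
-- '\x -> t' <| (s : S) → T s: bind a core variable, dub it 'x', go on
check dubs (SLam x t) (TyFun _ tT) = do
  let s = x ++ "'"              -- fresh-ish core name (toy)
  body <- check ((x, s) : dubs) t tT
  pure (Lam s body)
-- a variable elaborates to whatever its dub says it refers to
-- (the toy skips checking the demanded type here)
check dubs (SVar x) _ = Var <$> lookup x dubs
check _ _ _ = Nothing

-- e.g. check [] (SLam "x" (SVar "x")) (TyFun (TyAtom "A") (TyAtom "A"))
--   == Just (Lam "x'" (Var "x'"))
```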
Let’s build a bit more. The inferrable terms embed in the checkable terms. An inferrable term itself represents the problem of constructing a type-value pair
‘e’ : Set if e is a source language inferrable term
[T, t] : ‘e’ if T : Set and t : T
We can elaborate the embedding from inferrable to checkable like this:
‘e’ <| T does
Ss ← ‘e’
Q ← Ss 0 ↔ T
return [coe (Ss 0) T Q (Ss 1)]
That is, the Elaborator speculates a proof that the inferred type matches the demanded type. Some other agency will actually try to produce a witness for Q. It’s enough for the Elaborator to ask for one. The ‘coe’ operator coerces via Q to fix up the type discrepancy.
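Haskell’s Type.Reflection machinery gives a cheap way to act this out (a sketch: testEquality stands in for whatever agency produces the witness Q, and castWith plays the role of coe; LOVE’s ↔ is, of course, more generous than syntactic type equality):

```haskell
{-# LANGUAGE GADTs #-}
import Type.Reflection (TypeRep, typeRep)
import Data.Type.Equality (testEquality, castWith)

-- The solution of an inferrable-term problem: a type-value pair [T, t].
data Inferred where
  Inferred :: TypeRep a -> a -> Inferred

-- 'e' <| T for an inferrable e: take the pair [T', t], speculate
-- Q : T' ↔ T, and coerce; failure to prove Q is just Nothing here.
checkAt :: TypeRep t -> Inferred -> Maybe t
checkAt demanded (Inferred inferred v) = do
  q <- testEquality inferred demanded   -- Q ← Ss 0 ↔ T
  pure (castWith q v)                   -- coe (Ss 0) T Q (Ss 1)

-- e.g. checkAt (typeRep :: TypeRep Int) (Inferred typeRep (42 :: Int))
--   == Just 42
```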
The key form of inferrable term is a variable applied to a spine of checkable arguments.
‘f ss’, if a hyp proves the dub type (‘f’ : R = r), does
[T, t] ← Spine R r ‘ss’
return [T, t]
and I’d better make clear that
Spine R r ‘ss’ : Set if R : Scheme, r : El R, and ss is a sequence of checkable terms
and that
[T, t] : Spine R r ‘ss’ if T : Set and t : T
We can use the same language of refinement to explain how to manage the invocation of a function whose interface is explained by a type scheme.
Spine (ty S) s ‘’ does return [S, s]
Spine (impi S T) f ‘ss’ does
s ← S
[U, u] ← Spine (T s) (f s) ‘ss’
return [U, u]
Spine (pi S T) f ‘s ss’ does
s ← Sch S ‘s’
[U, u] ← Spine (T (s 0)) (f (s 0)) ‘ss’
return [U, u]
but I’d better add that
Sch S ‘s’ : Set if S : Scheme and s is a checkable term
[s] : Sch S ‘s’ if s : El S
and build the other side of the piece
Sch (ty S) ‘s’ does [s] ← ‘s’ <| S; return [s]
Sch (impi S T) ‘t’ does f ← (s : S) → Sch (T s) ‘t’; return [λ s → f s 0]
Sch (pi S T) ‘\x -> t’ does
f ← (s : El S) → (‘x’ : S = s) → Sch (T s) ‘t’
return [λ s → f s [] 0]
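To pull the pieces together, here is a deliberately untyped Haskell toy of Sch and Spine (all names are mine; LOVE’s Sets are erased, Val is a universal value domain, speculation under a binder becomes re-elaboration at application time, and an unsolved subproblem becomes a Hole value, the analogue of an orange hole in the ProofState):

```haskell
-- Checkable source terms: lambdas, and a head applied to a spine.
data Src = SLam String Src | SApp String [Src]

-- Untyped semantic values; Hole marks an unsolved subproblem.
data Val = VFun (Val -> Val) | VCon String [Val] | Hole

-- Schemes, with the Sets erased.
data Scheme = Ty | Pi Scheme (Val -> Scheme) | Impi (Val -> Scheme)

-- Dub hypotheses: each user name refers to a scheme-value pair.
type Dubs = [(String, (Scheme, Val))]

-- Sch S 's': elaborate a checkable term to an inhabitant of El S.
sch :: Dubs -> Scheme -> Src -> Val
sch dubs Ty         t          = check dubs t
sch dubs (Impi tT)  t          = VFun (\s -> sch dubs (tT s) t)
sch dubs (Pi sS tT) (SLam x t) = VFun (\s -> sch ((x, (sS, s)) : dubs) (tT s) t)
sch _    _          _          = Hole

-- 't' <| T, untyped: only heads applied to spines are handled here.
check :: Dubs -> Src -> Val
check dubs (SApp f ss) = case lookup f dubs of
  Just (fR, r) -> snd (spine dubs fR r ss)   -- the dub explains f
  Nothing      -> Hole
check _ _ = Hole

-- Spine R r 'ss': drive r along its scheme, elaborating the arguments.
spine :: Dubs -> Scheme -> Val -> [Src] -> (Scheme, Val)
spine _    Ty         v []     = (Ty, v)
spine dubs (Impi tT)  f ss     = let s = Hole     -- 's ← S': leave a hole
                                 in spine dubs (tT s) (f `app` s) ss
spine dubs (Pi sS tT) f (s:ss) = let v = sch dubs sS s
                                 in spine dubs (tT v) (f `app` v) ss
spine _    _          _ _      = (Ty, Hole)

app :: Val -> Val -> Val
app (VFun f) v = f v
app _        _ = Hole

-- e.g. with dubs for id and a in scope,
--   check dubs0 (SApp "id" [SApp "a" []])
-- elaborates to a's value, VCon "a0" [].
dubs0 :: Dubs
dubs0 = [ ("id", (Pi Ty (const Ty), VFun (\v -> v)))
        , ("a",  (Ty, VCon "a0" [])) ]
```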
That’s just a wee bit of the story, but it fits with the general plan: make sure that all of the problem-solving in the system is mediated via type inhabitation problems with a type-directed refinement strategy. Problem types are phantom types which associate the underlying semantic types with the syntax intended to deliver the semantic objects. Dub types and schemes manage source-level naming and implicit syntax conventions. The ProofState already supports incremental construction; we thus acquire incremental elaboration.
“Sch S ’s’ : Set if S : Scheme and s is an inferrable term
[s] : Sch S ’s’ if s : El S”
did you mean “checkable term” instead?
Andrea, you’re quite right, I did. I was half asleep by then. Fixed it now. Thanks
Neat, it’s nice to know I wasn’t confused.
By the way, is there something to say on how the dubbing assumptions (’x’ : S = s) get used? The post only shows their introduction.
In a formal sense, dub types are unit types, so their introduction is trivial and their elimination isn’t interesting. Even so, the rule for elaborating the application of a function symbol f to a spine ss searches through the dubbing assumptions to figure out what f is supposed to mean. Their phantom presence makes a difference.
Entertainingly, dub types fit elimination-with-a-motive very nicely. The dub types in a problem get refined along with everything else, leaving the same identifiers in scope, but with more informative definitions. That way, we can reliably stack up a sequence of refinement strategies and be sure that the later strategies still make sense in the scope resulting from the earlier strategies. Once refinement’s done, the business of elaborating the next “left-hand side” is to transform the problem into an equivalent one with more dub types inserted, in accordance with the programmer’s use of pattern variables.
We used to have much more ghastly ways to work on this problem. Epigram 1 relied on choosing a particular α-variant of the goal to make the names in scope fit the programmer’s choices. That such an approach violates several of my own principles about meaning and naming should have been a warning to me earlier…