Time flies like an applicative functor

June 26th, 2009

This post is in the garden path style. It’s literate Haskell. Preramble:


> import Control.Applicative  -- gosh!
> import Unsafe.Coerce        -- eek!

Suppose I give you an applicative functor


< instance Applicative De

I'm not going to tell you what it is, but I am going to give you this handy fixpoint operator. Of course, I'm going to get cross if you use other kinds of general recursion.


< fix :: (De x -> x) -> x

I’m also going to define two fixpoint type constructors, and relate one to the other:


> newtype Nu f = Con (f (Nu f))
> newtype NuDe f = ConDe (f (De (NuDe f)))

< splat :: NuDe f -> Nu f
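Here’s a taster of what fix buys us: a minimal sketch of my own, assuming a hypothetical stream base functor StreamF (none of this is from the post):


< data StreamF a s = a :> s            -- hypothetical: a head, and a slot for the tail
< ones :: NuDe (StreamF Int)
< ones = fix $ \ d -> ConDe (1 :> d)   -- the recursive call sits under De, so this is guarded

Then splat ones would be the honest stream of 1s.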

What’s going on? ‘De’ is for ‘delay’ — Read the rest of this entry »

Planet Haskell?

June 23rd, 2009

Chatting with dons, I get the feeling this blog would not be especially spamptious for Planet Haskell. I guess I’ll volunteer it if nobody bleats. Opinions?

Warming Up For Summer

June 23rd, 2009

Implementing the lambda calculus is, along with buying shoes and then not wearing them, one of my worst habits. Here’s today’s cookery, in literate Haskell. I’m using a certain amount of jiggery pokery.


> {-# LANGUAGE GADTs, KindSignatures, EmptyDataDecls,
>     FlexibleInstances, FlexibleContexts
> #-}

I haven’t implemented weak normalization for a while, but Pierre Boutillier and I are currently experimenting with it, reviving my curiosity.


> module Wnf where
> import Control.Applicative
> import Data.Traversable
> import Data.Foldable

Here come all the variations on terms, at once!


> data Form       -- reducible stuff or only normal?
>   = Rdu | Nrm
> data Stage      -- syntax or internal semantic stuff?
>   = Syn Form | Sem
> data Direction  -- checking or inferring types?
>   = Chk | Inf

I’m pretending I have dependent types. It could be worse. Terms are indexed by stage, with syntactic representations for reducible terms and normal forms, and a “semantic” stage for weak normal forms. As ever, I stay parametric in the representation of parameters.


> data Tm :: {-Stage-}* -> {-Direction-}* -> * -> * where

First, the checkable stuff, for all stages.


> -- functions
>   F      :: Bnd k x -> Tm k Chk x
> -- canonical things
>   (:-)   :: Con -> [Tm k Chk x] -> Tm k Chk x
> -- noncanonical things
>   N      :: Tm k Inf x -> Tm k Chk x

Next, the inferrable stuff, likewise.


> -- parameters
>   X      :: x -> Tm k Inf x
> -- eliminations (of noncanonical stuff)
>   (:$)   :: Tm k Inf x -> Elim (Tm k Chk x) -> Tm k Inf x
> -- operations
>   (:@)   :: Op -> [Tm k Chk x] -> Tm k Inf x

Syntactic forms are permitted to use de Bruijn indices.


>   V      :: Int -> Tm (Syn a) Inf x

Syntactic, reducible forms may have definitions and type annotations.


>   (:=:)  ::  Tm (Syn Rdu) Inf x -> Bnd (Syn Rdu) x ->
>              Tm (Syn Rdu) Chk x
>   (:::)  ::  Tm (Syn Rdu) Chk x -> Tm (Syn Rdu) Chk x ->
>              Tm (Syn Rdu) Inf x
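As a tiny sanity check (my example, not part of the development), note that a parameter embeds as a checkable term just by composing two of the constructors above:


< var :: x -> Tm k Chk x
< var = N . X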

Now, what’s a binding? Read the rest of this entry »

Flushing Buffers

June 21st, 2009

I’ve been gigging quite a bit lately, although mostly about cosmetic improvements to Haskell’s fake dependent types. I’ve just webbed up two sets of slides, for your amusement, bemusement, and seemusement. The first is Slicing It, a Haskellification of indexed containers, under a new type class, IFunctor. The second is Winging It, which takes a more example-driven look at IFunctors, and then ponders what IMonads might be. Apologies to Sir Tony Hoare, William Shakespeare, and Peter Hancock.
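I won’t reproduce the slides, but for orientation: IFunctor is the indexed analogue of Functor. One common rendering, which may differ in detail from the slides, goes like this:

  {-# LANGUAGE RankNTypes, TypeOperators #-}

  type s :-> t = forall i. s i -> t i   -- index-respecting functions

  class IFunctor f where
    imap :: (s :-> t) -> f s :-> f t    -- lift them, preserving indices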

Recent Scribbling

May 27th, 2009

Here comes the summer. I’ve scribbled a thing or two lately. There’s Let’s see how things unfold, a piece about coinduction. A subsequent Friday afternoon spent in cafés with Nicolas resulted in the beginnings of Grins from my Ripley Cupboard, whose purpose is to document failed design choices, reminding me what was the trouble. There’s plenty more where they came from, and I’m sure this summer will see more.

Whence Orange?

March 6th, 2008

The imminent departure of Doctor No reminds me to think about orange things a little.

As some of you may know, Buddy Holly subcategorizes the unsolved definitions in the proof state as

  • crying, with no value nor any way ever to get one, for things have gone wrong: these are brown
  • waiting, with no value yet, but a definite solution strategy which may yet work: these are yellow
  • hoping, with no value yet, nor any particular process intended to deliver one: these are orange

All holes wish they could have a value and go green. The brown ladies know this will never happen for them, and they also know that it’s your fault. The yellow ladies just wait for the solution strategy (these days, that’s you) to do its stuff. But the orange ladies should show a bit more independence. Let’s think about why and how later. For now, let’s ask ‘Who are these orange ladies anyway?’.
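If you fancy the taxonomy as a datatype, it might be rendered so (a sketch of mine, hypothetical names throughout):

  data Hole strategy value
    = Crying            -- brown: no value, nor any way ever to get one
    | Waiting strategy  -- yellow: no value yet, but a plan that may work
    | Hoping            -- orange: no value, no particular plan
    | Green value       -- solved, at last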

Read the rest of this entry »

Thank goodness for the typechecker

November 28th, 2007

You may have gathered that Epigram 2 features some hardwired universes: enumerations, signatures, datatypes, propositions. The first three of these are themselves coded via datatypes (why stop there, I now ask myself?) and they all involve some fairly hairy looking operators doing rather generic stuff. Correspondingly, we get to hardwire the codes of these datatypes, the types of the operators, and all that stuff. In effect, we’re writing chunks of Epigram in Epigram, but from the inside out. Fortunately, core Epigram 2 now has a functioning typechecker once more, which means we can tell the machine to typecheck all the hardwired stuff, slightly increasing the probability that we hardwired it right.

I should explain: we’ve been transposing the code so that each universe has its own file, explaining its syntax, typing rules, and so on; some extremely dumb utilities carve up the universe files and slot the relevant pieces into the syntax, the typechecker, ya da ya da. And now, each file of goodies comes with its lump of regression testing, ready to be glommed together and run at kickoff. There’s a word beginning with ‘a’. It’s ‘algebra’.

Tomorrow…oops, later today…it’ll be time to bolt the core that I’ve been refactoring into the proof state navigator that Nicolas and Peter have been building. Manual merge delight. I’ve still got a chunk of stuff to shift before the core is fully operational, but at least I can see the tunnel at the end of the tunnel.

November Blip

November 8th, 2007

It’s a surprise, I know, but with any luck we might be about to manage a workblip in between all the anonymous refereeing, workshop attendance, workshop organisation, paper polishing, RAE-bashing and running out of money. I know, I know, don’t speak too soon. But a peculiar astrological pattern has brought Nicolas, Morris and myself into an unprecedented degree of availability, even as it takes us to three different countries, so you might spot some darcs activity.

Nicolas is in the business of splitting * into Prop and Set. Prop is closed under False, True, ∧, ∀, =, and later we’ll add coinductive relations. It’s a Tarski-style universe with an explicit Prf : Prop → Set, so it’s straightforward to identify all inhabitants of Prf P definitionally. Pleasingly, the observational equality on propositions expands (P : Prop) = (Q : Prop) as (P => Q) ∧ (Q => P), where P => Q is short for ∀x:Prf P. Q. Trans-European comedy with quotient sets will follow shortly.

Meanwhile, Peter and I are getting back to working on the representation of the proof state. Morally, the proof state is a flat sequence of longnamed fully λ-lifted definitions in various states of repair. The trick, however, is to scrunch this sequence into a tree to improve sharing (of parameters and of name prefixes) and to reflect the hierarchical structure of development. Reports of updates get propagated from global to local; requests for updates get propagated from local to global. It’s life in a dissection, scheduling threads. The amusing thing is the way this process fits with a distributed computing model, whether we want it to or not.
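To make the scrunching concrete, here is one plausible shape for such a tree, a guess of mine at the structure rather than the actual implementation:

  data Dev t = Dev [Entry t]          -- a development: a scoped sequence of entries
  data Entry t
    = Param String t                  -- a parameter, shared by everything after it
    | Defn  String (Maybe t) (Dev t)  -- a definition (solved or not) with its own
                                      -- sub-development, sharing its name as a prefix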

Meaningless Scrawlings

April 17th, 2007

Enum := data {} where <nil@[]> <cons(u : UId; E : @ [])@[]> @[]

Sig := data {} where <nil@[]> <cons(u : UId; S : *; Sg : S → @ [])@[]> @[]

Data := \I : Sig => data {} where
  <out(i : sig I)@[]>
  <arg(u : UId; A : *; D : A → @[])@[]>
  <rec(u : UId; i : sig I; D : @[])@[]>
  <hrec(u : UId; H : *; i : H → sig I; D : @[])@[]>
  @[]

These apparently meaningless scrawlings have just been eaten. Only by the parser, so far, but the right terms did seem to emerge from the other side. If I tell you that @ stands as a placeholder for the type being recursively defined by data, and that if Σ : Sig is a code, then sig Σ is the corresponding signature type, you might almost be able to tell that they are the datatypes Enum, Sig and Data I, of codes for enumerations, signatures and datatypes indexed by signature I, respectively. The relevant codes emerge successfully from the parser. Terribly circular, at the moment, but quite stratifiable.
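For instance, reading the Enum code back (my gloss, not the post’s): nil and cons make Enum the type of lists of tags, roughly this Haskell type:

  type UId = String                    -- hypothetical representation of tags
  data Enum' = Nil | Cons UId Enum'    -- Enum is a list of tags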
Read the rest of this entry »

Edinburgh Hacking

March 29th, 2007

The observant among you may have noticed a flurry of darcs activity of late. That’ll be Nicolas and me hiding in Edinburgh (courtesy of Dr P Hancock and Dr K Tourlas) and trying to knock this thing back into shape. It’s coming on. The design document is a bit of a mess. I’ve added a new chapter, just to hold a dump of what we’re doing. Serious editorialisation is required. As for technical details of what’s happening: more later.

OTT, Cough

February 9th, 2007

I’m not very well at the moment, but I do get bored very easily. So I’ve made a thing. It’s an OTT piglet for us all to play with. It isn’t terribly exciting yet: I only have Pi and enumerations. I’m hoping to add more, real soon.

What you get is syntax, semantics, evaluation, equality testing, quotation and typechecking. In that order. There’s a rather crude test rig which lets you type definitions as could-be-worse Haskell expressions. I can’t even be bothered to write a parser or a printer.
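The code itself isn’t reproduced here, but the eval/quote end of such a pipeline usually has this shape (a minimal untyped sketch, with all names my own):

  data Tm  = Var Int | Lam Tm | App Tm Tm                 -- de Bruijn indices
  data Val = VLam (Val -> Val) | VVar Int | VApp Val Val  -- VVar carries a level

  eval :: [Val] -> Tm -> Val
  eval env (Var i)   = env !! i
  eval env (Lam b)   = VLam (\ v -> eval (v : env) b)
  eval env (App f a) = apply (eval env f) (eval env a)

  apply :: Val -> Val -> Val
  apply (VLam f) v = f v
  apply n        v = VApp n v                             -- stuck application

  quote :: Int -> Val -> Tm
  quote n (VLam f)   = Lam (quote (n + 1) (f (VVar n)))
  quote n (VVar k)   = Var (n - k - 1)                    -- level back to index
  quote n (VApp f a) = App (quote n f) (quote n a)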

The nice thing is that you can kind of see the connection between Wouter’s Agda development and this lump of Haskell. It’s chopped up the same way.

I’m plotting my next move. Something involving containers, perhaps…

Less strange, perhaps true

January 30th, 2007

Recent correspondence on the mailing list got me to thinking about OTT, proof irrelevance and what have you. I think, but am not yet sure, that we may be able to avoid deciding conversion during evaluation after all. This would really clean things up very nicely if true. What have we?
Read the rest of this entry »

Idiomatica.lhs

December 9th, 2006

Just thought I’d make a quick link to this pdf, generated from this module, which (hopefully) improves on Idiomatics.lhs in various ways. In particular, there’s a bit of a monad transformer library which seems promising…

Refactoring Value.lhs

November 28th, 2006

Returning to a project after a while away always provokes a feeling of revulsion. We made some progress back then, but we also made a mess. It’s that (in my case, weekly) feeling when you come back from a trip and remember that you didn’t have time to do the dishes before you left.

The target in my sights just now is Value.lhs, which I won’t link to because it’s gross. It should split into a few layers: data representation, computation and conversion, eval, quote. I’ve added new files into which I’ve started distributing the contents of the old ones. A bit more documentation is happening as I go.

I’m also keen to clean up the idiomatic structure of all of these things, so that there’s more abstraction over ‘sufficient capability’ and less lifting between specific idioms.
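That is, code should demand a class of sufficient capabilities rather than commit to one specific idiom. A sketch, with made-up names:

  class Applicative i => Barfs i where
    barf :: i a                         -- the one capability we need: failure

  -- written once against the class, usable at Maybe, parsers, elaborators, ...
  lookUp :: Barfs i => Int -> [a] -> i a
  lookUp _ []       = barf
  lookUp 0 (x : _)  = pure x
  lookUp n (_ : xs) = lookUp (n - 1) xs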

It’s not as hard as we thought!

November 23rd, 2006

Peter and I had another bash at the universe of datatypes. Seems we don’t need to work so hard to get something off the ground. We’re trying to see if we can keep it simple.

Peter and Conor think it out again (and again)

Of course, we still need the higher-order case, but it’s not so hard. Hopefully we’ll iron out the wrinkles and put something more formal in the design doc shortly. I’m mostly posting this so I can clean the board.

<=>

November 20th, 2006

Frustrated by over a month of being too busy to work on Epigram 2, I’m hiding in Wales and hacking. As part of the next move towards observational equality, I’ve added a heterogeneous operator (a:A) <=> (b:B) which computes to a type packaging the appropriate premises from which (a:A) = (b:B) might be concluded. I’m not totally sure it’s wise to do so, but we’ll see. It’s certainly fun to play with. I guess I’ll write up some more formal details shortly.
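To give the flavour, here is my reconstruction of what the pair case ought to look like; take the details with a pinch of salt:

  (p : Σ x:A. B) <=> (q : Σ x:A'. B')
    ~>  ((fst p : A) = (fst q : A')) ∧ ((snd p : B (fst p)) = (snd q : B' (fst q)))

That is, equal pairs are made from componentwise-equal parts, with the components compared heterogeneously.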

Epigram on Carbon XEmacs

October 22nd, 2006

Was incidentally poking about and tripped over Andrew Choi’s Carbon XEmacs port. Have downloaded and built it (on both Intel and PPC macs) and it seems to run Epigram ok. Quite pretty, in fact.

The only thing I’m stumped by is how to persuade it to look for fonts in the right place.

If you’re using a Mac, it’s worth a go…

New Year, New Time, New Faces, New Term

October 12th, 2006

First Epigram meeting of the New Year saw all of the above and, in an attempt to get as many of the faces as possible in a position to hack Epigram 2, we started at the start. By the end of the hour the blackboard carried a (PTS) fragment of the new representation of terms, which separates into two syntactic categories — the terms whose types can be inferred, and the terms that can only be checked against a known type:

[Blackboard photos: 121006bb2, 121006bb1]
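In code, the split has roughly this shape (a sketch of mine, not a transcription of the board):

  type Ty = Chk                                 -- in a PTS, types are just terms

  data Inf = V Int | Inf :$ Chk | Chk ::: Ty    -- inferable: variables, application, annotation
  data Chk = Lam Chk | Emb Inf                  -- checkable: abstraction, embedded inferables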

Moderate Badness

September 19th, 2006

I’ve just committed the blasphemy against the gods of version control that is sticking a generated binary in the repo. But we could use an up-to-date binary of the design doc on line, so I think it’s a justified and equitable squalid little hack. Please feel free to suggest a better way of doing this. (This is one respect in which our old makey-makey cvs regime was slightly better, I’m afraid.)

I’ve added the beginnings of a story about equality, but we haven’t got to the good bit yet.

*UPDATED: link to Epigram2.pdf above changed (see comment below) – pwm

Next Few Days

August 27th, 2006

Before we go much further, we need to tidy up what we now have. James is set to rationalise our implementation of tuples; Peter is on the case of datatypes. Meanwhile, I’m reorganising the module structure to make Ecce less monolithic, and I’m trying to write up what’s going on there. Key objective: make sense to Edwin. We should be generating run-time objects from Ecce modules sooner rather than later. Certainly, there’s no need to wait for the elaborator before building the back end.

I’m also keen to get my head round the stuff that Joel wrote. I’m reasonably familiar with the rendering library he put together, but I need to get to know the general gtk2hs application setup a bit better. I think I should try to get into that stuff and build some more kit. (I’m hoping Duncan wouldn’t mind if I wrote a general-purpose 2d layout library, like the one I already wrote for ascii text.)