<?xml version="1.0" encoding="utf-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments for Epilogue</title>
	<atom:link href="http://www.e-pig.org/epilogue/?feed=comments-rss2" rel="self" type="application/rss+xml" />
	<link>http://www.e-pig.org/epilogue</link>
	<description>for Epigram</description>
	<lastBuildDate>Sun, 06 Nov 2011 09:25:45 +0000</lastBuildDate>
	<generator>http://wordpress.org/?v=2.8.4</generator>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
		<item>
		<title>Comment on Problem Types, Dub Types, Type Schemes by Conor</title>
		<link>http://www.e-pig.org/epilogue/?p=1079&#038;cpage=1#comment-81297</link>
		<dc:creator>Conor</dc:creator>
		<pubDate>Sun, 06 Nov 2011 09:25:45 +0000</pubDate>
		<guid isPermaLink="false">http://www.e-pig.org/epilogue/?p=1079#comment-81297</guid>
		<description>In a formal sense, dub types are unit types, so their introduction is trivial and their elimination isn&#039;t interesting. Even so, the rule for elaborating the application of a function symbol f to a spine ss searches through the dubbing assumptions to figure out what f is supposed to mean. Their phantom presence makes a difference.

Entertainingly, dub types fit elimination-with-a-motive very nicely. The dub types in a problem get refined along with everything else, leaving the same identifiers in scope, but with more informative definitions. That way, we can reliably stack up a sequence of refinement strategies and be sure that the later strategies still make sense in the scope resulting from the earlier strategies. Once refinement&#039;s done, the business of elaborating the next &quot;left-hand side&quot; is to transform the problem into an equivalent one with more dub types inserted, in accordance with the programmer&#039;s use of pattern variables.

We used to have much more ghastly ways to work on this problem. Epigram 1 relied on choosing a particular &#945;-variant of the goal to make the names in scope fit the programmer&#039;s choices. That such an approach violates several of my own principles about meaning and naming should have been a warning to me earlier...</description>
		<content:encoded><![CDATA[<p>In a formal sense, dub types are unit types, so their introduction is trivial and their elimination isn&#8217;t interesting. Even so, the rule for elaborating the application of a function symbol f to a spine ss searches through the dubbing assumptions to figure out what f is supposed to mean. Their phantom presence makes a difference.</p>
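<p>To give a toy picture of that search (a Haskell sketch with invented names, nothing like the real elaborator): a dubbing assumption pairs the programmer&#8217;s identifier with the type and term it stands for, and resolving an applied symbol f is just a lookup through those assumptions.</p>
<pre><code>-- A dubbing assumption ('x' : S = s), as plain data.
data Dubbing ty tm = Dubbing
  { dubName :: String  -- the surface identifier 'x'
  , dubType :: ty      -- its type S
  , dubDefn :: tm      -- the term s it stands for
  }

-- What is f supposed to mean here?  Search the dubbings
-- in scope, innermost first.
resolve :: String -> [Dubbing ty tm] -> Maybe (ty, tm)
resolve f ctx =
  case [ (t, d) | Dubbing x t d <- ctx, x == f ] of
    (td : _) -> Just td
    []       -> Nothing
</code></pre>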
<p>Entertainingly, dub types fit elimination-with-a-motive very nicely. The dub types in a problem get refined along with everything else, leaving the same identifiers in scope, but with more informative definitions. That way, we can reliably stack up a sequence of refinement strategies and be sure that the later strategies still make sense in the scope resulting from the earlier strategies. Once refinement&#8217;s done, the business of elaborating the next &#8220;left-hand side&#8221; is to transform the problem into an equivalent one with more dub types inserted, in accordance with the programmer&#8217;s use of pattern variables.</p>
<p>We used to have much more ghastly ways to work on this problem. Epigram 1 relied on choosing a particular &alpha;-variant of the goal to make the names in scope fit the programmer&#8217;s choices. That such an approach violates several of my own principles about meaning and naming should have been a warning to me earlier&#8230;</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on Problem Types, Dub Types, Type Schemes by Andrea Vezzosi</title>
		<link>http://www.e-pig.org/epilogue/?p=1079&#038;cpage=1#comment-81280</link>
		<dc:creator>Andrea Vezzosi</dc:creator>
		<pubDate>Sat, 05 Nov 2011 07:16:27 +0000</pubDate>
		<guid isPermaLink="false">http://www.e-pig.org/epilogue/?p=1079#comment-81280</guid>
		<description>Neat, it&#039;s nice to know I wasn&#039;t confused.

By the way, is there something to say on how the dubbing assumptions (’x’ : S = s) get used? The post only shows their introduction.</description>
		<content:encoded><![CDATA[<p>Neat, it&#8217;s nice to know I wasn&#8217;t confused.</p>
<p>By the way, is there something to say on how the dubbing assumptions (’x’ : S = s) get used? The post only shows their introduction.</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on Problem Types, Dub Types, Type Schemes by Conor</title>
		<link>http://www.e-pig.org/epilogue/?p=1079&#038;cpage=1#comment-81262</link>
		<dc:creator>Conor</dc:creator>
		<pubDate>Fri, 04 Nov 2011 08:15:01 +0000</pubDate>
		<guid isPermaLink="false">http://www.e-pig.org/epilogue/?p=1079#comment-81262</guid>
		<description>Andrea, you&#039;re quite right, I did. I was half asleep by then. Fixed it now. Thanks.</description>
		<content:encoded><![CDATA[<p>Andrea, you&#8217;re quite right, I did. I was half asleep by then. Fixed it now. Thanks.</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on Problem Types, Dub Types, Type Schemes by Andrea Vezzosi</title>
		<link>http://www.e-pig.org/epilogue/?p=1079&#038;cpage=1#comment-81261</link>
		<dc:creator>Andrea Vezzosi</dc:creator>
		<pubDate>Fri, 04 Nov 2011 07:25:30 +0000</pubDate>
		<guid isPermaLink="false">http://www.e-pig.org/epilogue/?p=1079#comment-81261</guid>
		<description>&quot;Sch S ’s’ : Set if S : Scheme and s is an inferrable term
[s] : Sch S ’s’ if s : El S&quot;

did you mean &quot;checkable term&quot; instead?</description>
		<content:encoded><![CDATA[<p>&#8220;Sch S ’s’ : Set if S : Scheme and s is an inferrable term<br />
[s] : Sch S ’s’ if s : El S&#8221;</p>
<p>did you mean &#8220;checkable term&#8221; instead?</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on Lexical Structure by Adam M</title>
		<link>http://www.e-pig.org/epilogue/?p=1058&#038;cpage=1#comment-79873</link>
		<dc:creator>Adam M</dc:creator>
		<pubDate>Fri, 05 Aug 2011 01:12:47 +0000</pubDate>
		<guid isPermaLink="false">http://www.e-pig.org/epilogue/?p=1058#comment-79873</guid>
		<description>&quot;2D nonsense rides again!&quot; -- wow, that is a simple and nifty idea for improving legibility!  And without resorting to whitespace-sensitivity.  Too bad it&#039;s buried half-way in the middle of the blog post (which might cause people to skip it).  

If Coq supported that, I&#039;d use it.  But I won&#039;t hold my breath :)

It would also be nice if you could have multiple rules lined up horizontally, though that would necessitate using a different character for the left and right sides (rather than &quot;!&quot; for both).

I know it sounds silly, but there&#039;s something productivity-enhancing about being able to fit all of your type system&#039;s rules on the screen at once.</description>
		<content:encoded><![CDATA[<p>&#8220;2D nonsense rides again!&#8221; &#8212; wow, that is a simple and nifty idea for improving legibility!  And without resorting to whitespace-sensitivity.  Too bad it&#8217;s buried half-way in the middle of the blog post (which might cause people to skip it).  </p>
<p>If Coq supported that, I&#8217;d use it.  But I won&#8217;t hold my breath :)</p>
<p>It would also be nice if you could have multiple rules lined up horizontally, though that would necessitate using a different character for the left and right sides (rather than &#8220;!&#8221; for both).</p>
<p>I know it sounds silly, but there&#8217;s something productivity-enhancing about being able to fit all of your type system&#8217;s rules on the screen at once.</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on Design: Framing Source Code by Conor</title>
		<link>http://www.e-pig.org/epilogue/?p=1043&#038;cpage=1#comment-79841</link>
		<dc:creator>Conor</dc:creator>
		<pubDate>Wed, 03 Aug 2011 01:12:23 +0000</pubDate>
		<guid isPermaLink="false">http://www.e-pig.org/epilogue/?p=1043#comment-79841</guid>
		<description>Thanks for taking the trouble to write this.

I&#039;m expecting that most reasonably customizable editors would allow us to spawn an elaborator on the current buffer contents in the background, then merge the changes made by the elaborator with the changes you&#039;ve made in the meantime, marking conflicts. Just like the elaborator is your annoying coauthor!

Reloading from scratch each time is the easiest thing to get right, but we have been known to make &quot;local undo&quot; work. The logic&#039;s quite tricky: definitions are either Unchanged, Improved, Degraded or Deleted, in increasing order of badness. The worse the news, the harder you have to scrutinize downward dependencies. That way, you can start from the textual damage to the source file and compute how much of a hole it blows in the proof state. It&#039;s worth noting that a construction always amounts to a pair of definitions (T : Set, t : T), which can&#039;t be knocked out by bad news from elsewhere (because Set is always ok, and if T is a Set, it&#039;s reasonable to define a t : T). At a crude level, then, the worst that happens is that we figure out which constructions are downstream of stuff that has changed and we re-elaborate only them. (But is it worth the trouble?)

Elaboration in parallel is very much on the thought-agenda. The elaborator is a kind of multithreaded operating system: it could very well run multithreadedly. A crucial property (sometimes hard to ensure, and not true in Agda or Coq) is that refinement of solutions respects refinement of problems. You can kick off an elaboration problem that isn&#039;t entirely defined and be sure that whatever progress you can make on that basis will not be regretted when the problem becomes more defined. Representing incomplete definitions gives us the basis of a non-blocking read.

Modules, at least in the weak sense of file-and-naming structure, will not be long away. The proof state has a modular structure; the source language will piggyback on that. As for first-class modules, we&#039;re in the fabulous position that (i) all constructions are definitions (ii) (needs implementation, but the result is due to Nicolas Oury) we can always turn a definition into an abstraction with a propositional equation. Representation abstraction needs more thought...

Exports... In dependently typed languages, definitions don&#039;t go out of scope while anything depends on them. While it should be up to the importing module to decide which imported definitions get local short names, everything&#039;s available in principle, so it may be helpful to prepare a candidate import list that isn&#039;t full of incidental crap.

More source language syntax coming soon, and the punctuation details are always up for grabs, but I&#039;m working from the outside in. I have a prejudice about layout (which forces a certain simplicity): it should visibly work in a proportionally spaced font. The question is what cues indentation.

It is, of course, difficult to please. When things are new and strange, it&#039;s a mistake to choose an ill-fitting but familiar-looking notation. Better to be clear that something odd is happening. I expect there&#039;ll be a bit of that.</description>
		<content:encoded><![CDATA[<p>Thanks for taking the trouble to write this.</p>
<p>I&#8217;m expecting that most reasonably customizable editors would allow us to spawn an elaborator on the current buffer contents in the background, then merge the changes made by the elaborator with the changes you&#8217;ve made in the meantime, marking conflicts. Just like the elaborator is your annoying coauthor!</p>
<p>Reloading from scratch each time is the easiest thing to get right, but we have been known to make &#8220;local undo&#8221; work. The logic&#8217;s quite tricky: definitions are either Unchanged, Improved, Degraded or Deleted, in increasing order of badness. The worse the news, the harder you have to scrutinize downward dependencies. That way, you can start from the textual damage to the source file and compute how much of a hole it blows in the proof state. It&#8217;s worth noting that a construction always amounts to a pair of definitions (T : Set, t : T), which can&#8217;t be knocked out by bad news from elsewhere (because Set is always ok, and if T is a Set, it&#8217;s reasonable to define a t : T). At a crude level, then, the worst that happens is that we figure out which constructions are downstream of stuff that has changed and we re-elaborate only them. (But is it worth the trouble?)</p>
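<p>As a sketch of that bookkeeping (Haskell, invented names, not our actual code): derive Ord so that the badness order comes for free, and mark a definition for re-elaboration when it sits downstream of any news worse than Unchanged.</p>
<pre><code>import Data.Map (Map)
import qualified Data.Map as Map

-- News about a definition, in increasing order of badness;
-- the derived Ord agrees with that order.
data News = Unchanged | Improved | Degraded | Deleted
  deriving (Eq, Ord, Show)

-- Does definition n sit downstream of anything with bad news?
-- Assumes the dependency graph is acyclic, as in a
-- well-formed development.
needsRecheck :: Map String News -> Map String [String]
             -> String -> Bool
needsRecheck news deps n =
  Map.findWithDefault Unchanged n news > Unchanged
  || any (needsRecheck news deps)
         (Map.findWithDefault [] n deps)
</code></pre>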
<p>Elaboration in parallel is very much on the thought-agenda. The elaborator is a kind of multithreaded operating system: it could very well run multithreadedly. A crucial property (sometimes hard to ensure, and not true in Agda or Coq) is that refinement of solutions respects refinement of problems. You can kick off an elaboration problem that isn&#8217;t entirely defined and be sure that whatever progress you can make on that basis will not be regretted when the problem becomes more defined. Representing incomplete definitions gives us the basis of a non-blocking read.</p>
<p>Modules, at least in the weak sense of file-and-naming structure, will not be long away. The proof state has a modular structure; the source language will piggyback on that. As for first-class modules, we&#8217;re in the fabulous position that (i) all constructions are definitions (ii) (needs implementation, but the result is due to Nicolas Oury) we can always turn a definition into an abstraction with a propositional equation. Representation abstraction needs more thought&#8230;</p>
<p>Exports&#8230; In dependently typed languages, definitions don&#8217;t go out of scope while anything depends on them. While it should be up to the importing module to decide which imported definitions get local short names, everything&#8217;s available in principle, so it may be helpful to prepare a candidate import list that isn&#8217;t full of incidental crap.</p>
<p>More source language syntax coming soon, and the punctuation details are always up for grabs, but I&#8217;m working from the outside in. I have a prejudice about layout (which forces a certain simplicity): it should visibly work in a proportionally spaced font. The question is what cues indentation.</p>
<p>It is, of course, difficult to please. When things are new and strange, it&#8217;s a mistake to choose an ill-fitting but familiar-looking notation. Better to be clear that something odd is happening. I expect there&#8217;ll be a bit of that.</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on Design: Framing Source Code by pozorvlak</title>
		<link>http://www.e-pig.org/epilogue/?p=1043&#038;cpage=1#comment-79837</link>
		<dc:creator>pozorvlak</dc:creator>
		<pubDate>Tue, 02 Aug 2011 22:42:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.e-pig.org/epilogue/?p=1043#comment-79837</guid>
		<description>Interesting stuff. Here are my comments, for whatever they&#039;re worth:

&lt;i&gt; You’re using your favourite editor, not something I’ve chosen for you then banjaxed specially... It’s a straightforward file transducer, whose primary mode is source-in-source-out... We don’t have a read-eval-print-loop.&lt;/i&gt;

So your basic cycle is &quot;save, run transducer, reload file&quot;, perhaps with changes highlighted by your editor in some way? I think a scratch evaluator would still be useful (I found the normaliser very useful in your Agda course), but maybe not - in the same way that a saved, re-runnable test suite is better than evaluating expressions at a REPL.

&lt;i&gt;A useful secondary mode is to keep the system live and maintain an elaboration state in sync with the document: it is then sufficient to trade patches.&lt;/i&gt;

Are you thinking of just re-running the transducer from scratch on the patched document, or of maintaining persistent structures and altering them as needed? The latter would have obvious advantages for programming in the large, but strikes me as hard to get right - I know the Factor team had to do a lot of work to get this to work properly.

&lt;i&gt;The glory of lhs2TeX has taught us that it is important to embed programs and program fragments in other documents, especially LaTeX files, allowing the very code that is rendered to be checked.&lt;/i&gt;

I think K&amp;R got there first :-)

&lt;i&gt;It should also be possible to mark a definition as something to be elaborated but not brought into scope: we should be able to discuss candidate definitions for something and check that those candidates make sense without necessarily committing to them.&lt;/i&gt;

See also: usage examples embedded in documentation. There&#039;s even a &lt;a href=&quot;http://docs.python.org/library/doctest.html&quot; rel=&quot;nofollow&quot;&gt;Python testing library&lt;/a&gt; based around this idea.

&lt;i&gt;every comment sits in a scope, so that escaped expressions (probably) written &#124;expression&#124; can be checked.&lt;/i&gt;

Whoa. Cool.

&lt;i&gt;with &#124;&#124; signalling end-of-line comments.&lt;/i&gt;

You just freaked out every C programmer on the planet :-)

&lt;i&gt;So far, it looks like we have three kinds of construction: datatypes (keyword ‘data’), functions (keyword ‘let’), and proofs (keyword ‘lemma’).&lt;/i&gt;

I like this. Simple and clean.

&lt;i&gt; I’m just trying to give an impression of a source language for proofs where one expresses proof strategies and readily verifiable consequences&lt;/i&gt;

Looks good to me. I guess you&#039;d need more complicated examples to nail the syntax down, though.

&lt;i&gt;A simple topological sort should be sufficient to yield a suitable elaboration order&lt;/i&gt;

This is an implementation detail (one could also imagine, say, bottom-up elaboration in parallel): the hard part is detecting the dependencies (or rather, making them unambiguously detectable). I like your brutal solution, precisely because it&#039;s brutal: simple, memorable rules are generally easier to work with than subtle heuristics that work 99.9% of the time but go wrong in mind-bending edge cases.

&lt;i&gt;Not that I want to get into module system design just now&lt;/i&gt;

Don&#039;t put it off for too long: the more I program, the more convinced I become that providing a good-enough module system design is among the most important tasks of a language designer (he says, with precisely zero language designs under his belt). Leave it too late and you end up with #include. You want to provide enough of a module system that programmers feel no need to roll their own, otherwise your community will either fracture or never develop a culture of code-sharing. IMHO, you want to have a Comprehensive Epigram Archive Network up and running as soon as possible (could you piggyback one on Hackage? If not, let me offer my services as a hosting provider).

Plus, I think there&#039;s probably scope to do really interesting things with modules in dependently-typed languages. OCaml&#039;s functors become simple functions from types to tuples of types, but is it possible to write pragmas in user code?

&lt;i&gt;that would suggest using longnames by default for stuff in imported modules, with a manual override allowing imports to name the things which should be usable by short names.&lt;/i&gt;

This is considered best practice in the Perl community: it seems to work pretty well. Apart from anything else, it allows you to find where a name comes from by simply searching the current file with your editor.

&lt;i&gt;How do we manage exports?&lt;/i&gt;

I&#039;m not sure I understand the problem. Perl modules are expected to provide two arrays, @EXPORT and @EXPORT_OK: the former is all the symbols that are exported by default (best practice is for this to be empty), and the latter is all the symbols that can be exported if requested by client code (this mechanism is actually implemented in userspace code: see &lt;a href=&quot;http://perldoc.perl.org/Exporter.html/&quot; rel=&quot;nofollow&quot;&gt;here&lt;/a&gt;). I guess this mechanism doesn&#039;t make so much sense for a statically-typed language, but the general idea of having two lists should work.

&lt;i&gt;I haven’t said anything about layout yet&lt;/i&gt;

Again, I&#039;d suggest - no, beg - that you keep this brutally simple.</description>
		<content:encoded><![CDATA[<p>Interesting stuff. Here are my comments, for whatever they&#8217;re worth:</p>
<p><i> You’re using your favourite editor, not something I’ve chosen for you then banjaxed specially&#8230; It’s a straightforward file transducer, whose primary mode is source-in-source-out&#8230; We don’t have a read-eval-print-loop.</i></p>
<p>So your basic cycle is &#8220;save, run transducer, reload file&#8221;, perhaps with changes highlighted by your editor in some way? I think a scratch evaluator would still be useful (I found the normaliser very useful in your Agda course), but maybe not &#8211; in the same way that a saved, re-runnable test suite is better than evaluating expressions at a REPL.</p>
<p><i>A useful secondary mode is to keep the system live and maintain an elaboration state in sync with the document: it is then sufficient to trade patches.</i></p>
<p>Are you thinking of just re-running the transducer from scratch on the patched document, or of maintaining persistent structures and altering them as needed? The latter would have obvious advantages for programming in the large, but strikes me as hard to get right &#8211; I know the Factor team had to do a lot of work to get this to work properly.</p>
<p><i>The glory of lhs2TeX has taught us that it is important to embed programs and program fragments in other documents, especially LaTeX files, allowing the very code that is rendered to be checked.</i></p>
<p>I think K&amp;R got there first :-)</p>
<p><i>It should also be possible to mark a definition as something to be elaborated but not brought into scope: we should be able to discuss candidate definitions for something and check that those candidates make sense without necessarily committing to them.</i></p>
<p>See also: usage examples embedded in documentation. There&#8217;s even a <a href="http://docs.python.org/library/doctest.html" rel="nofollow">Python testing library</a> based around this idea.</p>
<p><i>every comment sits in a scope, so that escaped expressions (probably) written |expression| can be checked.</i></p>
<p>Whoa. Cool.</p>
<p><i>with || signalling end-of-line comments.</i></p>
<p>You just freaked out every C programmer on the planet :-)</p>
<p><i>So far, it looks like we have three kinds of construction: datatypes (keyword ‘data’), functions (keyword ‘let’), and proofs (keyword ‘lemma’).</i></p>
<p>I like this. Simple and clean.</p>
<p><i> I’m just trying to give an impression of a source language for proofs where one expresses proof strategies and readily verifiable consequences</i></p>
<p>Looks good to me. I guess you&#8217;d need more complicated examples to nail the syntax down, though.</p>
<p><i>A simple topological sort should be sufficient to yield a suitable elaboration order</i></p>
<p>This is an implementation detail (one could also imagine, say, bottom-up elaboration in parallel): the hard part is detecting the dependencies (or rather, making them unambiguously detectable). I like your brutal solution, precisely because it&#8217;s brutal: simple, memorable rules are generally easier to work with than subtle heuristics that work 99.9% of the time but go wrong in mind-bending edge cases.</p>
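<p>For what it&#8217;s worth, once the dependencies are detectable the sort itself is the easy bit; here is a sketch using Data.Graph from the containers library (invented names):</p>
<pre><code>import Data.Graph (graphFromEdges, topSort)

-- Order definitions so that everything a definition mentions
-- is elaborated before the definition itself.
-- Input: (name, names it depends on).
elabOrder :: [(String, [String])] -> [String]
elabOrder defs = reverse [ name (fromV v) | v <- topSort g ]
  where
    (g, fromV, _) = graphFromEdges [ (n, n, ds) | (n, ds) <- defs ]
    name (n, _, _) = n

-- e.g. elabOrder [("main", ["plus"]), ("plus", [])]
--   == ["plus", "main"]
</code></pre>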
<p><i>Not that I want to get into module system design just now</i></p>
<p>Don&#8217;t put it off for too long: the more I program, the more convinced I become that providing a good-enough module system design is among the most important tasks of a language designer (he says, with precisely zero language designs under his belt). Leave it too late and you end up with #include. You want to provide enough of a module system that programmers feel no need to roll their own, otherwise your community will either fracture or never develop a culture of code-sharing. IMHO, you want to have a Comprehensive Epigram Archive Network up and running as soon as possible (could you piggyback one on Hackage? If not, let me offer my services as a hosting provider).</p>
<p>Plus, I think there&#8217;s probably scope to do really interesting things with modules in dependently-typed languages. OCaml&#8217;s functors become simple functions from types to tuples of types, but is it possible to write pragmas in user code?</p>
<p><i>that would suggest using longnames by default for stuff in imported modules, with a manual override allowing imports to name the things which should be usable by short names.</i></p>
<p>This is considered best practice in the Perl community: it seems to work pretty well. Apart from anything else, it allows you to find where a name comes from by simply searching the current file with your editor.</p>
<p><i>How do we manage exports?</i></p>
<p>I&#8217;m not sure I understand the problem. Perl modules are expected to provide two arrays, @EXPORT and @EXPORT_OK: the former is all the symbols that are exported by default (best practice is for this to be empty), and the latter is all the symbols that can be exported if requested by client code (this mechanism is actually implemented in userspace code: see <a href="http://perldoc.perl.org/Exporter.html/" rel="nofollow">here</a>). I guess this mechanism doesn&#8217;t make so much sense for a statically-typed language, but the general idea of having two lists should work.</p>
<p><i>I haven’t said anything about layout yet</i></p>
<p>Again, I&#8217;d suggest &#8211; no, beg &#8211; that you keep this brutally simple.</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on Crude but Effective Stratification by Conor</title>
		<link>http://www.e-pig.org/epilogue/?p=857&#038;cpage=1#comment-79220</link>
		<dc:creator>Conor</dc:creator>
		<pubDate>Wed, 29 Jun 2011 16:43:58 +0000</pubDate>
		<guid isPermaLink="false">http://www.e-pig.org/epilogue/?p=857#comment-79220</guid>
		<description>No, but it has occurred to me that it might be interesting to think in that direction, not least because it would be an excellent excuse to visit my old teacher, Tom Forster. There&#039;s nothing about these rules which suggests that there has to be a bottom universe. Things might shift downwards as easily as they shift up.

That said, the usual inductive-recursive method for making models of predicative type theories might struggle without an inductive characterization of the hierarchy. I wonder what we&#039;d do instead...</description>
		<content:encoded><![CDATA[<p>No, but it has occurred to me that it might be interesting to think in that direction, not least because it would be an excellent excuse to visit my old teacher, Tom Forster. There&#8217;s nothing about these rules which suggests that there has to be a bottom universe. Things might shift downwards as easily as they shift up.</p>
<p>That said, the usual inductive-recursive method for making models of predicative type theories might struggle without an inductive characterization of the hierarchy. I wonder what we&#8217;d do instead&#8230;</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on Crude but Effective Stratification by Mathnerd314</title>
		<link>http://www.e-pig.org/epilogue/?p=857&#038;cpage=1#comment-79121</link>
		<dc:creator>Mathnerd314</dc:creator>
		<pubDate>Thu, 23 Jun 2011 15:21:08 +0000</pubDate>
		<guid isPermaLink="false">http://www.e-pig.org/epilogue/?p=857#comment-79121</guid>
		<description>Have you looked at implementing &lt;a href=&quot;http://en.wikipedia.org/wiki/New_Foundations&quot; rel=&quot;nofollow&quot;&gt;NFU&lt;/a&gt;?</description>
		<content:encoded><![CDATA[<p>Have you looked at implementing <a href="http://en.wikipedia.org/wiki/New_Foundations" rel="nofollow">NFU</a>?</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on Insertion Sort by Conor</title>
		<link>http://www.e-pig.org/epilogue/?p=690&#038;cpage=1#comment-78437</link>
		<dc:creator>Conor</dc:creator>
		<pubDate>Mon, 16 May 2011 14:42:04 +0000</pubDate>
		<guid isPermaLink="false">http://www.e-pig.org/epilogue/?p=690#comment-78437</guid>
		<description>Well spotted. I&#039;ve fixed the informal notation to reflect the actual code.

Meanwhile, I chose to add &#039;bot and &#039;top, and to bound the lists at both ends, because these ordered lists are but one instance of a general scheme for constructing ordered data structures. It&#039;s certainly possible to define ordered cons-lists with only a lower bound, but binary search trees require both bounds.

The grammar F ::= 1 &#124; F+F &#124; F[]F &#124; Rec determines a class of functors whose fixpoints are readily orderable, given some notion of element X with order &lt;=

[[_]] : F -&gt; (X -&gt; X -&gt; Set) -&gt; (X -&gt; X -&gt; Set)
[[ 1 ]] R l u = :- l &lt;= u
[[ F + F&#039; ]] R l u = [[ F ]] R l u + [[ F&#039; ]] R l u
[[ F [] F&#039; ]] R l u = (x : X) * [[ F ]] R l x * [[ F&#039; ]] R x u
[[ Rec ]] R l u = R l u

The idea is that everywhere there&#039;s a pair, there&#039;s a pivot. Ordered lists are given by the code 1 + (1 [] Rec). Binary search trees are given by the code 1 + (Rec [] Rec).</description>
		<content:encoded><![CDATA[<p>Well spotted. I&#8217;ve fixed the informal notation to reflect the actual code.</p>
<p>Meanwhile, I chose to add 'bot and 'top, and to bound the lists at both ends, because these ordered lists are but one instance of a general scheme for constructing ordered data structures. It&#8217;s certainly possible to define ordered cons-lists with only a lower bound, but binary search trees require both bounds.</p>
<p>The grammar F ::= 1 | F+F | F[]F | Rec determines a class of functors whose fixpoints are readily orderable, given some notion of element X with order <=</p>
<p>[[_]] : F -> (X -> X -> Set) -> (X -> X -> Set)<br />
[[ 1 ]] R l u = :- l <= u<br />
[[ F + F' ]] R l u = [[ F ]] R l u + [[ F' ]] R l u<br />
[[ F [] F' ]] R l u = (x : X) * [[ F ]] R l x * [[ F' ]] R x u<br />
[[ Rec ]] R l u = R l u</p>
<p>The idea is that everywhere there&#8217;s a pair, there&#8217;s a pivot. Ordered lists are given by the code 1 + (1 [] Rec). Binary search trees are given by the code 1 + (Rec [] Rec).</p>
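<p>For instance, here is the ordered-list code 1 + (1 [] Rec) approximated as a Haskell GADT over type-level naturals (a sketch only, not the Epigram development):</p>
<pre><code>{-# LANGUAGE ConstraintKinds, DataKinds, GADTs,
             KindSignatures, TypeOperators #-}
import Data.Proxy (Proxy (..))
import GHC.TypeNats (Nat, type (<=))

-- Ordered lists bounded at both ends: nil demands l <= u,
-- and each cons carries a pivot x, with l <= x required on
-- the left and [x, u] bounding the tail.
data OList (l :: Nat) (u :: Nat) where
  ONil  :: (l <= u) => OList l u
  OCons :: (l <= x) => Proxy x -> OList x u -> OList l u

-- 3 then 7, between the bounds 0 and 10; GHC discharges the
-- 0 <= 3, 3 <= 7 and 7 <= 10 obligations.
ex :: OList 0 10
ex = OCons (Proxy :: Proxy 3) (OCons (Proxy :: Proxy 7) ONil)
</code></pre>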
]]></content:encoded>
	</item>
</channel>
</rss>
