programming (and other) musings

Posts tagged "programming":

29 Dec 2020

what's not to like

I've just discovered Codeberg, a code hosting site that, finally, has let me create a user with plain emacs-w3m, shows me content reasonably well with a refreshingly uncluttered layout, gracefully handles org files (why, it's even generating a table of contents for me), has a good privacy policy, isn't under the wing of any corporation, and had my preferred username up for grabs.

So i'm definitely testing the waters for it to become my default site for (newish) published code: here's my jao/elibs: Emacs libraries and little utilities repository for starters (and, if you look closely, you'll also find my emacs config nearby). If and when that happens, they're going to get my humble contribution too, i reckon.

Tags: programming emacs
31 Oct 2020

two decades of gnu mdk

I've just published GNU MDK 1.3.0, its 28th release, which finally migrates MDK's graphical user interface to GTK+ 3, to keep up with the, ahem, not-so-modern times and see to it that MDK stays alive for at least another decade or two.


Twenty years ago on this day, the first version of MDK 0.1 was released. Back then, it didn't have an Emacs or graphical interface1, support for internationalization or any integration with Guile, and the debugger was really bare bones (it still is, but not that much). It wasn't yet a GNU package. But all those things, and then some, came in rapid succession, as i used the project to discover the Free Software world, both at a technical and a human level.

As a physicist out of grad school, i wasn't young, but i was just starting in the world of software and everything was fresh and new and exciting. I remember the thrill of receiving emails from total strangers willing to help and offering insightful advice (hi there, Philip King, wherever you are these days), and of thinking that someone else, out there, was finding my little program fun and useful2. Or of exchanging messages with RMS, that guy i had read about in Hackers and admired so much, and then seeing, in the process, MDK enter the GNU umbrella. And of seeing it shortly afterwards on Knuth's own page on TAOCP, up there in the MIXware list.

MDK was my first project developed entirely in Emacs running on Debian (twenty years ago, that felt bolder than it sounds now), two constant companions up to this day. Seeing it as a maintained Debian package was another big satisfaction and milestone for memory lane.

I also have vivid recollections of the mind-bending experience of discovering Scheme, because i learnt that Guile was the extension language of choice for GNU, and my baby steps on writing a lexer or an interpreter, and how humbled i felt when i took a proper compilers course a few years later and looked back at my clumsily rediscovered wheels3.

MDK was also my gateway to publishing a book for the first time, thanks to the nice guys of the GNU Press, with a polished version of its manual. Okay, it's more of a booklet, and out of print by now, but it was an enriching experience nonetheless; for instance i got a chance to see an editor in action.


After two decades, having grown older and bitter, all those things look small and of little importance, but as a freshman i truly had the best of times. I think that maybe the most important thing i learned was to collaborate with other people, and i've since been always very fond of the kindness of strangers, and how much one can learn from them.

Anyway, here's to the next twenty years. Happy hacking!



To be honest, i almost never use MDK's graphical interface, and prefer to comfortably use it within emacs, but a bit of eye candy is not bad every now and then.


Keep in mind that those were ancient times, more so on my side of the world: the cool new thing to learn and use for me was CVS, and there were rumors about something called subversion. And then SourceForge felt like the best thing ever.


Shortly after, SICP would be the eye-opener it's been for many and would make everything make sense (i talked a bit about that process in one of my very first blog posts). In retrospect, i think the journey that i then started is the main reason i never went on to complete an MMDK for MMIX, which was planned and going to be written in OCaml.

Tags: programming
08 Aug 2020

xmobar: a battery trick

i've been maintaining xmobar for more than a decade now, and i still use it daily and tweak it almost as often. With almost a hundred contributors besides myself, and many bugs to solve, i am always learning new things. The latest one: that Font Awesome thing everyone seems so fond of.

i just decided to add a few more icons to my configuration and, in particular, to have a battery monitor that graphically displays an approximation of the time left.

For that, you specify the new font as one of the additional fonts:

additionalFonts = ["xft:FontAwesome-9"]

Then, find a list of glyphs that represent a charging battery (the ones between 0xf240 and 0xf244 work well), and assign that to the -f flag (the foreground of horizontal bars) with -W (the display width) set to 0, and use it in your battery monitor template, which should include somewhere <leftbar>; for instance:

BatteryN ["BAT0"]
         ["-t", "<acstatus>"
         , "-S", "Off", "-d", "0", "-m", "3"
         , "-L", "10", "-H", "90", "-p", "3"
         , "-W", "0"
         , "-f", "\xf244\xf243\xf243\xf243\xf242\xf242\xf242\xf241\xf241\xf240"
         , "--"
         , "-P"
         , "-a", "notify-send -u critical 'Battery running out!!!!!!'"
         , "-A", "5"
         , "-i", "<fn=1>\xf1e6</fn>"
         , "-O", "<fn=1><leftbar>  \xf1e6</fn> <timeleft>"
         , "-o", "<fn=1><leftbar></fn> <timeleft>"
         , "-H", "10", "-L", "7"
         ] 50 "batt0"

where 0xf1e6 displays a little plug. here's a peek at the corner of my desktop with a charging battery:


The only thing we needed to enable this little feature was to implement indexing on the foreground bar string when the special value 0 is passed, a really small change codewise. Moreover, the feature is generic and applies to any horizontal bar in any monitor, so one can play similar tricks in many places.
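The indexing idea is easy to emulate. Here's a hypothetical Python sketch (an illustration of the concept only, not xmobar's actual Haskell code; the function name is mine): with a width of 0, the "bar" degenerates into a single glyph picked from the foreground string according to the charge fraction.

```python
# Sketch of the glyph-indexing trick (assumed behaviour, not xmobar's code):
# with width 0, the horizontal bar becomes one character, chosen by
# indexing the foreground string with the current charge fraction.
def pick_glyph(glyphs, fraction):
    # fraction is the battery charge in [0, 1]
    i = min(int(fraction * len(glyphs)), len(glyphs) - 1)
    return glyphs[i]

battery_glyphs = "\uf244\uf243\uf243\uf243\uf242\uf242\uf242\uf241\uf241\uf240"
print(pick_glyph(battery_glyphs, 0.95))  # prints the near-full glyph, 0xf240
```

With ten glyphs in the string, each character covers a 10% band of the charge range, which is exactly what makes the 0xf240-0xf244 sequence work well.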

This feature will be available in the forthcoming xmobar 0.36 release, but you can of course just grab it from its master branch.

Tags: programming xmobar
13 May 2020


For years, i've been using C-x p, C-x o and C-c <n> to move to other windows, but with ace window i am substituting all of them with M-o. Problem is, muscle memory interferes and i find myself clumsily moving around (and often lost) with the former ones. Or i did, before i followed a piece of advice from Stefan Monnier in emacs-devel: unbind those keys you want to forget, and you'll get an error when you relapse.

In my case, that was just

(global-set-key (kbd "C-x o") nil)

together with commenting out the definitions of the other ones. For the record, in Stefan's case, he was trying to remember to use the newer C-h o (symbol help) instead of C-h f (function help) or C-h v (variable help), so he unbound the last two.

After a while, when one's muscles have forgotten, one can re-enable the old bindings, for the few cases where they're justified.

Tags: emacs programming
27 Feb 2020


                       ((I m a g i n e)
               (((Imagine there's no FORTRAN)
                   (It's easy if you try)
           (No SML below us) (Above us only Y)
          (Imagine all              the people)
         (Living for                their Chez))
      ((Imagine there's          no memory leaks)
                             (It isn't hard to do)
                              (Nothing to malloc(3)
                                    or free(3) for)
                               (And no (void *) too)
                             (Imagine all the people)
                              (Living in parentheses))
                           ((You may say I'm a Schemer)
                             (But I'm not the only one)
                         (I hope someday you'll join us)
                               (And the world will be as
                        (lambda (f) (lambda (x) (f x)))))
                          ((Imagine those   continuations)
                         (I wonder              if you can)
                   (No need for              C or pointers)
               (A brotherhood                        of Dan)
                (Imagine all                      the people)
                (GCing all                          the world))
           ((You may say                          I'm a Schemer)
          (But I'm not                              the only one)
     (I hope someday                                you'll join us)
    (And the world                                        will be as
(lambda (f)                                     (lambda (x) (f x)))))))

Posted to comp.lang.scheme on January 17, 1996, for Scheme's twentieth birthday.

Tags: programming
26 Feb 2020

literate programming

I got started with literate programming many years ago, out of admiration for almost everything else i knew done by Donald Knuth, and tried my hand at it in some toyish projects in OCaml and Scheme. So it wasn't without enthusiasm that i plunged into the literate world.

For reasons i've forgotten (probably better Emacs integration, perhaps simpler syntax) i chose noweb over funnelweb (perhaps, i would recommend the reverse these days, if it weren't for org's babel) and wrote a set of make functions to help create multi-file noweb projects. If memory serves, i was writing an MMIX interpreter in OCaml and a distributed job scheduler in Scheme, and i tried in both cases to use a literate style.

On paper, literate programming is great. Just take a look at Knuth's MMIXware book, Hanson's C Interfaces and Implementations or Jay McCarthy's blog: that's definitely how i want to read and understand programs for fun and profit.

As i soon learned, that's however not the way i wanted to write programs.

Programming is for me mostly a dialectic process of understanding problems and their solutions, always evolving and always provisional (and most often buggy!), and i never seemed able to freeze the solution at hand in a structure that was not in flux. When changing the structure of your solution means rewriting your essay, there's too much friction.

In addition, i tend to build my programs in a bottom-up fashion. Reorganisation comes later, so it's again a chore to start by trying to write down an essay: it's only when i've reached the top that i can see the landscape and properly describe it.

And of course there's the REPL: interactive programming using a live image is, or seemed to be for a long time, at odds with a literate approach to writing your programs.

So, as many others, i just shelved LP for the day i would write a book, and there it's been, untouched, until recently.

It all began when i found myself, more and more often, writing solutions to customer problems that needed the collaboration of several programming languages, with a sprinkle of shell scripting. That need is even more pressing when one moreover routinely uses DSLs (as we do in BigML with Flatline or WhizzML). Together with all the programming bits, one needs of course to deliver decent documentation explaining it. So it's the perfect storm for an org babel document, especially when one combines it with poly-org. This package allows a truly poly-mode experience in the org buffer itself: when one enters a src block, emacs switches to its language's major mode, and one can do things like sending expressions to the corresponding REPL, or properly indenting without having to pop up a new buffer as in babel's default modus operandi. It might seem a little thing, but to me it's made a great difference. Add to that the excellent export capabilities of org, and the ease with which one can tangle out different files from the same org file, together with having at your fingertips the immense gamut of functionality offered by org mode, and it's very difficult not to become a fan of this form of literate programming.
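As an illustration of the tangling step, here's a minimal org src block (file name and contents are hypothetical, just to show the shape): babel's M-x org-babel-tangle extracts every block marked with a :tangle header into the named file.

```org
#+begin_src emacs-lisp :tangle init.el
  ;; this block ends up in init.el when running M-x org-babel-tangle
  (setq inhibit-startup-message t)
#+end_src
```

Several blocks scattered through the same org file can tangle to different files, which is what makes a single document double as both essay and multi-file source tree.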

Another setting in which babel is lately winning my heart is as a tool for writing emacs lisp packages. It all began with writing my init.el using org, at first tangling it semi-automatically on emacs startup. But then i discovered literate-elisp, a package that teaches emacs to load org files directly, reading the elisp source blocks within, and, most importantly, retaining the source code positions for them. That means that, when one jumps to a function or variable definition1, one lands directly in the org file: no intermediate tangled elisp is needed. That's, again, incredibly handy, especially when combined with poly-org. I can now write packages like signel in an org file that is directly exportable to a blog post, and i find myself writing at the very same time and place both the emacs lisp program and the blog post explaining how it works.

All that said, this new penchant for LP has its obvious limits: i'm aware that it is not going to be comfortable for projects that aren't as self-contained and small as the ones i mention above. For instance, i don't think we could write our clojure backend as a bunch of org files, much as i wish i were wrong.



i've had since forever (it's been so long that i've forgotten where i stole it) a little utility function that lets me conveniently jump a la geiser using M-. to any emacs lisp definition:

(defun elisp-find-definition (name)
  "Jump to the definition of the function (or variable) at point."
  (interactive (list (thing-at-point 'symbol)))
  (cond (name
         (let ((symbol (intern-soft name))
               (search (lambda (fun sym)
                         (let* ((r (save-excursion (funcall fun sym)))
                                (buffer (car r))
                                (point (cdr r)))
                           (cond ((not point)
                                  (error "Found no definition for %s in %s"
                                         name buffer))
                                 (t
                                  (switch-to-buffer buffer)
                                  (goto-char point)
                                  (recenter 1)))))))
           (cond ((fboundp symbol)
                  (funcall search 'find-function-noselect symbol))
                 ((boundp symbol)
                  (funcall search 'find-variable-noselect symbol))
                 (t (message "Symbol not bound: %S" symbol)))))
        (t (message "No symbol at point"))))
Tags: programming emacs
30 Nov 2016

donald stewart on haskell in the large

Last October Don Stewart gave a very interesting talk at CodeNode on his experience using Haskell at a large scale. And by large, he means millions of lines of code. Although he wasn't allowed to talk about the very specifics of the code, his talk is full of interesting remarks that i found resonate with my experience. They actually convinced me that the next language i should try in production should be OCaml.

The first thing that drew my attention in this talk was the sheer amount of code they've written: 3.4 million lines of code. Taking into account Haskell's conciseness, that's quite a lot. I've got experience with very substantial projects written in functional languages such as Clojure with two orders of magnitude less code! Although it's true that running on top of the JVM lets you reuse hundreds of libraries, which might close the gap a bit.

A second interesting point was the fact that Don's team is using interpreted code (via their own Haskell compiler): virtual machines have many advantages, yes.

And i wasn't surprised at all to learn that their code is strict by default: my experience with the limited laziness of Clojure has been bad enough to reinforce my exasperation with space leaks in my hobbyist Haskell. Laziness is pretty cool, but also the source of the most difficult-to-track-down bugs i've encountered in recent years. More difficult, in fact, than any memory leak i can remember from those dreadful C++ years: there at least i had a very clear, fool-proof methodology to avoid those problems (unlike the case of mutable state: the peace of mind bought by immutability is priceless).

I also found myself nodding enthusiastically at Don's plea for sticking to basic, simple constructs and to avoid as much as possible the line noise of excessive infix operators. One can write Perl in any language. And, for an old lisper, his advocacy of a programming style based on writing interpreters for little languages, sounded right on the money.

The thing he mentions over and over that i haven't experienced first-hand in a real-life, sizeable project is the power of a real type system (no, Java's or C++'s aren't), and that's something i'd really like to try.

Then at some point he mentions they have their own module system, not as good as ML's, but… and i realised that there's a language out there that has all the good things the talk mentions and needs none of the workarounds they use. It's called OCaml, and i've been meaning to use it in earnest for some years now. Maybe it's finally time i dust off that part of my bookshelf, if only to compare real-world development in a properly typed language with my experience with dynamic languages such as Clojure (which has not been by any means as bad as static typists seem to fear).

At any rate, there's lots more in this talk that is surely more interesting than my ramblings: recommended!

Tags: programming
06 Aug 2014

spj's y-combinator in scheme

Recently i bought a second-hand copy of Simon Peyton Jones' classic The implementation of functional programming languages, and i've been having some very pleasant reading hours during the last week.

For instance, i've enjoyed Simon's clear and to-the-point introduction to the lambda calculus and, more concretely, his really lucid explanation of the workings of recursion and the Y combinator. As a matter of fact, i've enjoyed it so much that i just wanted to briefly reproduce it here, paraphrased in my surely poorer style, just for the fun of it.

In the pure lambda calculus, there's no direct way of defining a function that refers to itself: all you have is the ability to define anonymous lambda abstractions. Using Scheme's syntax, that means that you can use lambda as much as you want, as in

((lambda (x) (+ 1 x)) 3)

but define (recursive or not) is not available. For instance, you cannot define a list-length recursive function like this:

(define list-length
  (lambda (lst) (if (null? lst) 0 (+ 1 (list-length (cdr lst))))))

the main problem being the appearance of list-length in the lambda expression's body. So the general issue we want to solve is how to define a function f of the form

f = (... f ...)

without referring to f in the body. When all you have is lambda calculus, everything looks like a function: we can rewrite the right hand side of the above equation as a lambda application:

f = ((lambda (x) (... x ...)) f) := (h f)

where we've defined h := (lambda (x) (... x ...)). For instance, in our length example, h is the function

h = (lambda (x)
      (lambda (lst)
        (if (null? lst) 0 (+ 1 (x (cdr lst))))))

and we're trying to solve the equation (h list-length) = list-length. In other words, you give me h, a function taking a function as argument and returning another function, and i need to give you back an f that is its fixed point, i.e., an f satisfying f = (h f).

So all i need to do is find a function (let's call it Y) that takes h as its argument and returns f, that is, by definition of Y:

f = (Y h)

but, as we've seen, f equals (h f), which in turn is (h (Y h)), which gives us the defining equation of Y:

(h (Y h)) = (Y h)

If i had that magic Y function (or combinator, since it won't use any global variables), i could for instance immediately compute list-length as:

(Y (lambda (x) (lambda (lst) (if (null? lst) 0 (+ 1 (x (cdr lst)))))))

The nicest kind of wishful thinking is the one that comes true: we actually know of a lambda expression that satisfies the Y combinator's defining equation (Y h) = (h (Y h)), namely:

(lambda (h) ((lambda (x) (h (x x))) (lambda (y) (h (y y)))))

You can check it by yourself: (Y h) is just the body of the expression above:

(Y h) => ((lambda (x) (h (x x))) (lambda (y) (h (y y))))

and now apply the first lambda to its argument (the second lambda) to obtain:

=> (h ((lambda (y) (h (y y))) (lambda (y) (h (y y)))))
=> (h (Y h))

as desired! The only wrinkle left is that, as defined, our Y combinator cannot be directly coded in Scheme, because Scheme is a strict language and those pesky (x x) and (y y) applications cause an infinite recursion.

But that's just an accidental difficulty due to Scheme's evaluation strategy being too eager that has a standard solution: a beta abstraction. If you have a function F in Scheme, you can define a totally equivalent function G by (define G (lambda (x) (F x))). We say that G is a beta abstraction of F, or that F is a beta reduction of G. And the usual reason you would beta abstract a function in Scheme is in order to delay the evaluation of its body, just what the doctor ordered.

Remember, h is a function taking another function as an argument, so (x x) and (y y), which are used as h's arguments, must be functions, so they're equivalent to (lambda (a) ((x x) a)) and (lambda (a) ((y y) a)), so we can define Y, if we're so inclined, also as:

(define Y
  (lambda (h)
    ((lambda (x) (h (lambda (a) ((x x) a))))
     (lambda (x) (h (lambda (a) ((x x) a)))))))

and now

(define list-length
  (Y (lambda (f) (lambda (lst) (if (null? lst) 0 (+ 1 (f (cdr lst))))))))

As you see, list-length's body doesn't mention itself and, as you can check by yourself with your favourite interpreter, the last two definitions are valid and working scheme code.
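If Scheme isn't at hand, the same eta-expanded Y translates almost verbatim to any strict language with closures. Here's a Python rendition (an illustration of my own, with my own names, not part of SPJ's book):

```python
# Eta-expanded Y combinator for a strict language: the inner
# (lambda a: ...) wrappers delay the (x x) self-applications,
# exactly like the beta abstractions in the Scheme version.
Y = lambda h: (lambda x: h(lambda a: x(x)(a)))(lambda x: h(lambda a: x(x)(a)))

# list_length defined without any reference to itself in its body:
list_length = Y(lambda f: lambda lst: 0 if not lst else 1 + f(lst[1:]))
print(list_length([1, 2, 3]))  # prints 3
```

The same Y works for any h of the right shape: for instance, Y applied to the analogous factorial-building function computes factorials, again with no self-reference anywhere.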

In case you still feel a bit unsure as to how our Y is pulling its trick, you can dispel any misgivings just by seeing how the combinator transforms the given h step by step, always reproducing h in the process (it's only the branch in h that doesn't call its argument, the recursion's base case, that breaks the implicit infinite loop induced by Y). Let's do it with our list-length example, reducing its application to, say, the list '(1 2), step by step:

(define W (lambda (f) (lambda (a) ((f f) a))))

(list-length '(1 2))
  ;; definition of list-length as (Y h)
  => ((Y h) '(1 2))
  ;; definition of Y and W
  => (((lambda (f) (h (W f))) (lambda (f) (h (W f)))) '(1 2))
  ;; beta-reduction of the first lambda applied to the second
  => ((h (W (lambda (f) (h (W f))))) '(1 2))
  ;; definition of h for the case of list-length
  => ((lambda (lst)
        (if (null? lst) 0 (+ 1 ((W (lambda (f) (h (W f)))) (cdr lst)))))
      '(1 2))
  ;; function application
  => (+ 1 ((W (lambda (f) (h (W f)))) '(2)))
  ;; definition of W
  => (+ 1 (((lambda (f) (h (W f))) (lambda (f) (h (W f)))) '(2)))

and notice what just happened: we started with

(((lambda (f) (h (W f))) (lambda (f) (h (W f)))) '(1 2))

and, after one iteration of the implicit recursion, reduced it to:

(+ 1 (((lambda (f) (h (W f))) (lambda (f) (h (W f)))) '(2)))

i.e., 1 plus the same form we started with but applied to the list (2) instead of the list (1 2). By following the exact same reduction steps, we can simplify the above to:

(+ 1 1 (((lambda (f) (h (W f))) (lambda (f) (h (W f)))) '()))

and now, when expanding the same form to the right applied to the empty list, the base case of the recursion (the true branch of the (null? lst) test in h), will kick in and reduce the above to

(+ 1 1 0) => 2

avoiding that the self-reproducing Y forms keep reappearing ad infinitum.

Tags: programming
19 Jun 2013

where my mouth is

For many years, i've been convinced that programming needs to move forward and abandon the Algol family of languages that, still today, hampers the field. And that that forward direction has been signalled for decades by (mostly) functional, possibly dynamic languages with an immersive environment. But it wasn't until recently that i was able to finally put my money where my mouth has been all these years.

A couple of years ago i finally became a co-founder and started working on a company i could call my own (i had been close in the past, but not really there), and was finally in a position to really influence our development decisions at every level. Even at the most sensitive of them all: language choice.

During my previous professional career i had been repeatedly disappointed by the lack of conviction, when not plain mediocrity, of the technical decision makers in what seemed like all kinds of companies, no matter how cool, big or small: one would always end up eaten alive by the Java/C++ ogre, with a couple of testimonial scripts (or perhaps some unused scheme bindings) to pay lip service to the hipsters around.

Deep down, the people in charge either didn't think languages make any difference or, worse, bought into the silly "availability of programmers" argument. I'm still surprised anyone would believe such a thing. If a guy came telling me that he wanted to program only in, say, Java because that's what he knows best and that he doesn't really feel prepared or interested in learning and using, say, Clojure (or any other language, really), i wouldn't hire him in a million years, no matter what language my project were using, and no matter how many thousands of candidates like this one i had at my disposal.

Give me programmers willing to learn and eager to use Lisp or Haskell (or Erlang, ML, OCaml, Smalltalk, Factor, Scala…) any day, even if we happen to be using C++, for goodness sake! Those are the ones one needs, scanty as they might be.

Given the sad precedents, when i embarked in my new adventure, i was careful not to try to impose on anyone my heretical ideals: they had to be accepted on their own grounds, not based on my authority. But i was finally lucky enough to meet a team of people with enough intellectual curiosity and good sense (which is, again, actually a pre-condition to be in a startup). People, let me tell you, with no experience whatsoever in languages outside the mainstream, but people that, nonetheless, were good programmers. And when you give a good programmer a good language, good things happen.

Come to think of it, it's true that, with good programmers, the language doesn't matter: they'll choose the best one, or follow the advice of more experienced colleagues, and quickly take advantage of any extra power the novel environment has to offer.

Our experience so far could hardly be a better counterexample to the naysayers.

Our backend is 99.4% coded in Clojure, and 66% of the team had never programmed seriously in any lisp, let alone Haskell or Prolog (heck, not even i (the remaining 33%) had actually tried anything non-mainstream for real in a big project!) Maybe some Ruby, and lots and lots of Java and C and C++. But they accepted the challenge after reading around and learning the basics, and 3 months later you couldn't pry Clojure from their hands. More importantly, they had fun discovering they could also be Dr Jekyll.

Of course, lots of effort is a must, and someone with a bit of experience to guide the newbies is probably necessary. In particular, extensive code reviews were of the essence in our case, and i had never read and criticised this many lines of code in a similar amount of time. But know what? I prefer "wasting" time on code reviews to spending at least as much time writing factories of factories of factories and similar boilerplate (configured of course using XML) and chasing bugs in the resulting soup. Not to mention that, again, you need code reviews no matter the language you use: the trick of having a code base so ugly that nobody would review it doesn't fly.

So yes, it's more difficult to find hackers, but you need to find them anyway. And yes, it may require more serious software engineering, but, again, you need it anyway. So why shouldn't we use the best tools at our disposal? I can finally tell you: been there, done it, and it does work.

Tags: programming
03 Dec 2009

enjoying haskell

I've been reading about Haskell quite a bit during the last months, writing some actual code, and liking the language more and more. After many years favouring dynamically typed languages, i'm beginning to really appreciate Haskell's type system and the benefits it brings to the table.

A common argument from the static typing camp is that the compiler is catching a whole class of bugs for you, to which dynamic typers answer that a good test suite (which you need anyway for any serious development) will catch those relatively trivial bugs for you. I tend to agree with the dynamic faction on this issue, but then i think that the strength of static typing (coupled with good type inference) is not at all about the compiler catching typing bugs but, rather, about enforcing useful constraints. When you write Haskell, you have to think hard about your data types and the functions using them; and the compiler will keep complaining and, most importantly, the code will feel awkward and somehow ad hoc until you find a good set of types to solve your problem.

The limits to your freedom imposed by the type system entail, in my experience, a boost in the amount of thought and imagination that i put into my design and implementation, in much the same way as the constraints imposed by metre and rhythm on poetic writing boost creativity and actually help produce a beautiful text. Or, in fact, in the same way as any kind of constraint in any creative endeavour helps (paradoxically, at first sight) in attaining beauty, or, at least, in having fun during the process.

In my experience, the process of writing a program or library in any language is always a struggle to find the right way of expressing the solution to a problem, as the culmination of a series of approximations. The code feels better, more expressive and easy-flowing with each rewrite, until something just clicks and i get this feeling that i've finally captured the essence of the problem (a litmus test being that then it's natural to extend the solution to cases i hadn't thought of when writing it, as if, somehow, the new solutions were already there, waiting for you to discover them). And i'm finding that the powerful type system offered by Haskell (think not only of vanilla Hindley-Milner, but also of extensions such as GADTs or type families) is helping me reach the (local) optimum quicker, that satisfying constraints means i'm closer to the final solution when my code compiles for the first time. You often hear Haskell programmers saying something similar ("once my code compiles, it works"), and i think it's mostly true, except that the real reason is not that the compiler is catching trivial typing bugs, but, rather, that the constraints imposed by the type system are making you think harder and find the right solution. Same thing with monads, and the clean separation they provide for stateful computations: again, you must think carefully about the right combination of monads and pure code to solve the problem, and most of the time your code will simply not type check if you don't get the solution right.

There are two more ways that Haskell's type system is helping me write better programs. Two ways that are especially pertinent when the code becomes sizeable enough. The first one is self-documentation: seeing the type of my functions (or asking the interpreter for them) instantly informs me of almost everything i need to know to use them; in fact, when writing in dynamic languages i keep annotating function signatures with this same information, only that there i'm all by myself to ensure that this information is right. PLT's contract system is but a recognition of the usefulness of typing in this regard, although i much prefer the terseness and notational elegance of Haskell's type signatures over the much more verbose and, to my eyes, somewhat clunky notation used by PLT (which is not really PLT's fault, being as it is a very schemish notation). Let me stress here that having a REPL such as ghci is a godsend (and, to me, a necessity for really enjoying the language): it will tell me the type of an expression in much the same way as decent Lisp or Scheme environments will report a function's signature.

The second way Haskell lends a helping hand with non-trivial code bases is refactoring. As i mentioned above, i rewrite my programs several times as a rule, and rewrites almost always involve modifying data structures or adding new ones. As i grow older, i find it more and more difficult to keep in my head all the places and ways a given data structure is used in my programs, and with dynamic languages i'm often falling back to grepping the source code to find them. And again, their plasticity often works against me, in that they let me use those data structures in crooked ways, or forget to take into account new fields or constructors for a modified data type. Haskell's compiler has proved an invaluable ally to my refactorings and, by comparison, modifying and maintaining my bigger dynamic programs is not as fun as it used to be.

As an aside, types are not the only thing i'm finding enjoyable about Haskell. Its astonishing capabilities to express very abstract problems with a remarkable economy of expression (due, in part, to its highly tuned syntax) are extremely useful. To my mind, they mimic the process by which in math we solve harder and harder problems by abstracting more and more, cramming together more relevant information in less space (some cognitive science writers will tell you that thought and even consciousness consists in our ability to compress information). That means that i can express my solutions by capturing them in a very high-level description: initially, that makes them harder to understand, but once i feel comfortable with the basic concepts and operations, they scale up much, much better than more verbose, less sophisticated ones. Using these new hard-earned concepts, i can solve much harder problems without adding to the complexity of the code in a significant way (one could say, using a loose analogy, that the solutions grow logarithmically with problem complexity instead of polynomially or exponentially). A direct consequence of this expressiveness is that some well-written Haskell programs are, hands down, the most beautiful pieces of code i've ever seen (just pick a random post at, say, A Neighborhood of Infinity and you'll see what i mean; or read Richard Bird's Sudoku solver and compare his solution with one written in your favourite programming language).

Finally, let me say that i find programming in Haskell more difficult than programming in any other language i've used, with perhaps the only exception of Prolog. Sometimes, considerably so. And that's perfectly fine with me. For one thing, it makes it more interesting and rewarding. In addition, i'm convinced that that's the price to pay for being able to solve harder problems. I take issue with the frequent pleas to the effect that programming should be effortless or trivial: writing good programs is hard, and mastering the tools for doing it well takes, as with any other engineering or scientific discipline, hard work (why, i haven't heard anyone complaining that building bridges or computing the effects of gravitational lensing is too difficult). There's no silver bullet.

All that said, please don't read the above as an apostasy letter announcing the embracement of a new religion. There's still much to be said in favour of dynamic languages, especially those in the Lisp family, whose malleability (fostered by their macro systems) is also a strength, in that they allow you to replicate some of the virtues i've been extolling in this post. Haskell lacks the power of homoiconicity, and its template mechanisms feel clunky by comparison, which is a serious drawback in some contexts (i have yet to decide how serious, as i have yet to decide how much i'm missing in reflection capabilities). As always, it is a matter of trade-offs and, fortunately, nobody will charge you with high treason for using the language best fit for the problem at hand, or so i hope.

Tags: programming
10 Jul 2007

erlang now!

I don't know whether Monty Python ever wrote a gag on programming languages, but if they did, this Erlang video must be it. The funniest thing is that it is pretty serious, and does a great job showing one of my most cherished abilities when using dynamic languages, namely, adding new functionality to a running system on the fly. As for the Monty Python bit, well, you have to see it to know what i mean: i kept laughing out loud during most of its twelve minutes (those Ericsson engineers seem to be taken from The Larch, but then maybe it's just my sense of humor).

Update: Mike, one of the engineers in the film, has been kind enough to post a comment about the experience, which I'm reproducing here for your convenience:

We gave a well received "demo" in 1990, in conjunction with ISS90, a big telecoms conference in Stockholm. We made this movie to record the demo. We actually used a professional company to do the filming, but I won't mention their names as they would probably sue me for libel.

The worst of it all is that we were deadly serious at the time. The Monty Python aspect must be due to our backgrounds. Of the people involved, Joe is English, Robert is Swedish - but brought up in Australia, Bjarne is also Swedish but spent some formative years in Scotland and I'm half Welsh half Scottish with some Irish thrown in somewhere.

In 1990, when we made the movie, the very idea of using anything other than C, Plex, assembly language etc to design embedded concurrent systems was heresy, we expected to take the world by storm. It seems that the cheap communication and multi core processors are giving Erlang a boost 16 years later. Well at least in the intervening time we have tested the hell out of Erlang and its implementations!


PS. If you look carefully at the film, you can see that Erlang at that time had a Prolog like syntax.

PPS. I can't watch the movie without laughing (at) myself

While we're at it, let me mention that i like many a thing about Erlang. It's a curious mix of good stuff: a simple syntax and kind of minimalist flavor that reminds one of Scheme, pattern matching and single-assignment variables like Haskell's, and what amounts to a programming paradigm of its own based on mailboxes and processes (which, as you surely know, are amazingly cheap). Also worth mentioning is Erlang's error handling philosophy, which is, at first sight, a bit startling (i'm still not sure it makes perfect sense, but after playing with the language a bit, it looks like it does; this is an interesting post on those matters). Definitely worth a look: see for instance Joe Armstrong's thesis, or, if you have some bucks to spare, the forthcoming Programming Erlang.

Tags: programming auld
17 Mar 2006

programmers go bananas

Introduction: lists galore

I learned programming backwards, plunging right on into C and, shortly after, C++ and Java from the very beginning. I was knee deep in complex data structures, pointers and abstruse template syntax in no time. And the more complex it all felt, the more i thought i was learning. Of course, i was clueless.

Reading SICP and learning about functional programming changed it all. There were many occasions for revelation and awe, but one of my most vivid recollections of that time is my gradual discovery of the power of simplicity. At about half way into SICP i realised in wonder that that beautiful and powerful world was apparently being constructed out of extremely simple pieces. Basically, everything was a list. Of course there were other important ingredients, like procedures as first-class objects, but the humble list was about the only data structure to be seen. After mulling on it for a little bit, i saw where lists draw their power from: recursion. As you know, lists are data types recursively defined: a list is either the empty list or an element (its head) followed by another list (its tail):

list = []
list = a : list

where i'm borrowing Haskell's notation for the empty list ([]) and the list constructor (:), also known by lispers as () and cons. So that was the trick, i thought: lists have recursion built-in, so to speak, and once you've read a little bit about functional programming you don't need to be sold on the power and beauty of recursive programs.
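
As a tiny illustration (my own toy definition, not from SICP), a function over lists naturally mirrors that two-clause recursive definition, one equation per constructor:

```haskell
-- len follows the shape of the type: a clause for [] and one for (:).
len :: [a] -> Int
len []       = 0           -- the empty list
len (_ : xs) = 1 + len xs  -- a head followed by another list
```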

It is often the case that powerful and beautiful yet simple constructs have a solid mathematical foundation, and only when you grasp it do you really realize how powerful, beautiful and amazingly simple that innocent-looking construct is. Lists, and recursive operations on them, are an excellent case in point. But the path connecting them to their mathematical underpinnings is a long and winding one, which lies in the realm of Category Theory.

I first became acquainted with the relationship between categories and recursive programming reading Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire, by Erik Meijer, Maarten Fokkinga and Ross Paterson. Albeit very enjoyable, this paper presupposes a high degree of mathematical sophistication on the reader's side. I will try in this article to give you a simplified overview of the concepts involved, including Category Theory, its application to programming languages and what funny names like catamorphism, anamorphism or lambda-lifting have to do with your everyday list manipulations. Of course, i'll be only scratching the surface: interspersed links and the Further reading section provide pointers to more in-depth explorations of this wonderland.

Categorical interlude

Categories are (relatively) simple constructs. A category consists of a set O of objects, and a set A of arrows between elements of O. Arrows are composable: if there's an arrow from a to b, and another one from b to c, there must be an arrow from a to c (where a, b and c are elements of O). Besides, composition is associative: if you have arrows from a to b, b to c, and c to d, you can go from a to d via two different paths (namely, first from a to c and then from c to d, or first from a to b and then from b to d), and associativity demands that both paths yield the same arrow from a to d. Finally, for each element a in O there's an identity arrow which goes from a to itself, such that following this arrow changes nothing. These properties are better visualized with a diagram (or a bit of mathematical notation), as shown in the image on the right.


A category captures a mathematical world of objects and their relationships. The canonical example of a category is Set, which contains, as objects, (finite) sets and, as arrows, (total) functions between them. But categories go far beyond modeling sets. For instance, one can define a category whose objects are natural numbers, and whose 'arrows' are provided by the relation "less than or equal" (that is, we say that there is an arrow joining two numbers a and b if a is less than or equal to b). What we are trying to do with such a definition is to somehow capture the essence of ordered sets: not only are integers ordered, but so are dates, lemmings on a row, a rock's trajectory or the types of the Smalltalk class hierarchy. In order to abstract what all those categories have in common we need a way to go from one category to another preserving the shared structure in the process. We need what the mathematicians call an isomorphism, which is the technically precise manner of stating that two systems are, in a deep sense, analogous; this searching for commonality amounts to looking for concepts or abstractions, which is what mathematics and (good) programming is all about (and, arguably, intelligence itself, if you are to believe, for instance, Douglas Hofstadter's ideas).

As it happens, our definition of a category already contains the concept of isomorphic objects. Think of an arrow from a to b as an operation that transforms a into b. An arrow from b to a will make the inverse transformation. If composing both transformations gives you the identity, you are back to the very same object a, and we say that a and b are isomorphic: you can transform one into the other and back at will. In a deep sense, this concept captures a generic way of expressing equality that pervades all maths: if you're not afraid of a little bit of maths, Barry Mazur's essay When is a thing equal to some other thing? is an excellent introduction to Category Theory with an emphasis on the concept of equality. Among many other things, you will learn how the familiar natural numbers can be understood as a category, or how an object is completely defined by the set of its transformations (and, therefore, how to actually get rid of objects and talk only of transformations; i know this is stretching and mixing metaphors (if not plain silly), but this stress on arrows, as opposed to objects, reminded me of Alan Kay's insistence on focusing on messages rather than objects). Another introductory article with emphasis on categories as a means to capture sameness is R. Brown and T. Porter's Category Theory: an abstract setting for analogy and comparison.

Not only objects inside a category can be transformed into each other. We reveal the common structure of two disjoint categories by means of a functor mapping one category into another. A functor consists of two functions: one that maps each object of the first category to an object in the second, and another one putting in correspondence arrows in one category with arrows in the second. Besides, these functions must preserve arrow composition. Let me spell this out mathematically. Consider two categories, C and C', with object sets O and O' and arrow sets A and A'. A functor F mapping C to C' will then consist of two functions (Fo, Fa); the first one taking elements of O to elements of O':

Fo: O -> O'

Fo(a) in O' for every a in O

and the second one taking arrows from A to arrows in A':

Fa: A -> A'

Fa(f) in A' for every f in A

and such that, if f is an arrow from a to b in C, Fa(f) is an arrow from Fo(a) to Fo(b) in C'. Moreover, we want that following arrows in C is 'analogous' to following them in C', i.e., we demand that

Fa(fg) = Fa(f)Fa(g)

In the left hand side above, we are composing two arrows in C and then going to C', while in the right hand side we first take each arrow to C' and, afterwards, compose them in there. If C and C' have the same structure, these two operations must be equivalent. Finally, F must preserve identities: if i is the identity arrow for an element a in O, Fa(i) must be the identity arrow for Fo(a) in O'. The diagram on the left shows a partial graph (i'm not drawing the identity arrows and their mappings) of a simple functor between two categories, and ways of going from an object a in the first category to an object x in the second one which are equivalent thanks to the functor's properties.


As you can see in this simple example, the functor gives us the ability to see the first category as a part of the second one. You get a category isomorphism in the same way as between objects, i.e., by demanding the existence of a second functor from C' to C (you can convince yourself that such a functor does not exist in our example, and, therefore, that the two categories in the diagram are not isomorphic).

You have probably guessed by now one nifty property of functors: they let us go meta and define a category whose objects are categories and whose arrows are functors. Actually, Eilenberg and MacLane's seminal paper General theory of natural equivalences used functors and categories of categories to introduce categories for the first time (natural transformations are structure-preserving maps between functors: this Wikipedia article gives an excellent overview on them).

But enough maths for now: it is high time to show you how these rather abstract concepts find their place in our main interest, programming languages.

Categories and programming languages

About the only similarity between C and Haskell programming is that one spends a lot of time typing ASCII arrows. But of course, Haskell's are much more interesting: you use them to declare the type of a function, as in

floor:: Real -> Int

The above stanza declares a function that takes an argument of type Real and returns an integer. In general, a function taking a single argument is declared in Haskell following the pattern

fun:: a -> b

where a and b are types. Does this ring a bell? Sure it does: if we identify Haskell's arrows with categorical ones, the language types could be the objects of a category. As we have seen, we need identities

id:: a -> a
id x = x

and arrow composition, which in Haskell is denoted by a dot

f:: b -> c
g:: a -> b
fg:: a -> c
fg = f . g

Besides, associativity of arrow composition is ensured by Haskell's referential transparency (no side-effects: if you preserve referential transparency by writing side-effect free functions, the order in which you call them won't matter): we've got our category. Of course, you don't need Haskell, or a statically typed language for that matter: any strongly typed programming language can be modelled as a category, using as objects its types and as arrows its functions of arity one. It just happens that Haskell's syntax is particularly convenient, but one can define function composition easily in any decent language; for instance in Scheme one would have

(define (compose f g) (lambda (x) (f (g x))))

Functions with more than one argument can be taken into the picture by means of currying: instead of writing a function of, say, 2 arguments:

(define (add x y) (+ x y))
(add 3 4)

you define a function which takes one argument (x) and returns a function which, in turn, takes one argument (y) and returns the final result:

(define (add x) (lambda (y) (+ x y)))
((add 3) 4)

Again, Haskell offers a pretty convenient syntax. In Haskell, you can define add as:

add x y = x + y

which gets assigned, when applied to integers, the following type:

add:: Int -> (Int -> Int)

that is, add is not a function from pairs of integers to integers, but a function that takes an integer and returns a function of type Int -> Int. Finally, we can also deal with functions taking no arguments and constant values by introducing a special type, called unit or 1 (or void in C-ish), which has a unique value (spelled () in Haskell). Constants of our language (as, e.g., True or 43.23) are then represented by arrows from 1 to the constant's type; for instance, True is an 1 -> Boolean arrow. The unit type is an example of what in category theory is known as a terminal object.

Now that we have successfully modelled our (functional) programming language as a category (call it C), we can use the tools of the theory to explore and reason about the language constructs and properties. For instance, functors will let me recover the original motivation of this post and explore lists and functions on them from the point of view of category theory. If our language provides the ability to create lists, its category will contain objects (types) of the 'list of' kind; e.g. [Int] for lists of integers, [Boolean] for lists of Booleans and so on. In fact, we can construct a new sub-category CL by considering list types as its objects and functions taking and returning lists as its arrows. For each type a we have a way of constructing a type, [a] in the sub-category, i.e., we have a map from objects in C to objects in CL. That's already half a functor: to complete it we need a map from functions in C to functions in CL. In other words, we need a way to transform a function acting on values of a given type to a function acting on lists of values of the same type. Using the notation of the previous section:

Fo(a) = [a]
Fa(f: a -> b) = f': [a] -> [b]

Fa is better known as map in most programming languages. We call the process of going from f to f' lifting (not to be confused with a related, but not identical, process known as lambda lifting), and it's usually pretty easy to write an operator that lifts a function to a new one in CL: for instance in Scheme we would write:

(define (lift f) (lambda (lst) (map f lst)))

and for lift to truly define a functor we need it to behave well with respect to function composition:

(lift (compose f g)) = (compose (lift f) (lift g))

We can convince ourselves that this property actually holds by means of a simple example. Consider the function next which takes an integer to its successor; its lifting (lift next) will map a list of integers to a list of their successors. We can also define prev and (lift prev) mapping (lists of) integers to (lists of) their predecessors. (compose next prev) is just the identity, and, therefore, (lift (compose next prev)) is the identity too (with lifted signature). But we obtain the same function if we compose (lift next) and (lift prev) in CL, right? As before, there's nothing specific to Scheme in this discussion. Haskell even has a Functor type class capturing these ideas. The class defines a generic lift operation, called fmap, which generalizes our list lifting to arbitrary type constructors:
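
The same next/prev check can be spelled in Haskell, where lifting a function to lists is map (a sketch of mine restating the law above, not new material):

```haskell
next, prev :: Int -> Int
next n = n + 1
prev n = n - 1

-- Lifting commutes with composition: lifting (next . prev), which is the
-- identity, gives the same function as composing the lifted versions.
lawHolds :: [Int] -> Bool
lawHolds xs = map (next . prev) xs == (map next . map prev) xs
           && map (next . prev) xs == xs
```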

fmap :: (a -> b) -> (f a -> f b)

where f a is the new type constructed from a. In our previous discussion, f a = [a], but if your language gives you a way of constructing, say, tuples, you can lift functions on given types to functions on tuples of those types, and repeat the process with any other type constructor at your disposal. The only condition for calling it a functor is that identities are mapped to identities and composition is preserved:

fmap id = id
fmap (p . q) = (fmap p) . (fmap q)
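
To see that fmap really does generalize beyond lists, here's the stock Maybe functor at work (a standard instance, shown here just for illustration):

```haskell
double :: Int -> Int
double = (* 2)

-- The same fmap, two different functors:
underMaybe :: Maybe Int
underMaybe = fmap double (Just 3)  -- Just 6; fmap double Nothing is Nothing

underList :: [Int]
underList = fmap double [1, 2, 3]  -- [2,4,6], i.e., plain old map
```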

I won't cover usage of type constructors (and their associated functors) other than lists, but just mention a couple of them: monads, another paradigmatic one beautifully (and categorically) discussed by Stefan Klinger in his Programmer's Guide to the IO Monad - Don't Panic (also discussed at LtU), and the creation of a dance and music library, for those of you looking for practical applications.

To be continued…

Returning to lists, what the lifting and categorical description above buys us is a way to formalize our intuitions about list operations, and to transfer procedures and patterns on simple types to lists. In SICP's section on Sequences as conventional interfaces, you will find a hands-on, non-mathematical dissection of the basic building blocks into which any list operation can be decomposed: enumerations, accumulators, filters and maps. Our next step, following the bananas article i mentioned at the beginning, will be to use the language of category theory to provide a similar decomposition, but this time we will talk about catamorphisms, anamorphisms and similarly funny named mathematical beasts. What the new approach will buy us is the ability to generalize our findings beyond the list domain and onto the so-called algebraic types. But this will be the theme of a forthcoming post. Stay tuned.

Further reading

The best introductory text on Category Theory i've read is Conceptual Mathematics : A First Introduction to Categories by F. William Lawvere (one of the fathers of Category Theory) and Stephen Hoel Schanuel. It assumes no sophisticated mathematical background, yet it covers lots of ground. If you feel at home with maths, the best option is to learn from the horse's mouth and get a copy of Categories for the Working Mathematician, by Mac Lane.

The books above do not deal with applications to Computer Science, though. For that, the canonical reference is Benjamin Pierce's Basic Category Theory for Computer Scientists, but i find it too short and boring: a far better choice is, in my opinion, Barr and Wells's Category Theory and Computer Science. A reduced version of Barr and Wells's book is available online in the site for their course Introduction to Category Theory. They are also the authors of the freely available Toposes, Triples and Theories, which will teach you everything about monads, and then more. Maarten Fokkinga is the author of this 80-page Gentle Introduction to Category Theory, with a stress on the calculational and algorithmical aspects of the theory. Unless you have a good mathematical background, you should probably take gentle with a bit of salt.

Let me close by mentioning a couple of fun applications of Category Theory, for those of you that know a bit about it. Haskell programmers will like this implementation of fold and unfold (as a literate program) using F-(co)algebras and applied to automata creation, while those of you with a soft spot for Physics may be interested in John Baez's musings on Quantum Mechanics and Categories.

Tags: programming auld
12 Feb 2006

continuation kata

If you have a little Scheme under your belt, just plunge into the challenge right now:

We want to find three integers, x, y and z, such that all of them are in the range [2,9] and they are the lengths of the sides of a right triangle (x^2 = y^2 + z^2). The challenge is to write two procedures, in-range and fail, such that the solution to the problem looks like this:

(let ((x (in-range 2 9))
      (y (in-range 2 9))
      (z (in-range 2 9)))
  (if (= (* x x) (+ (* y y) (* z z)))
      (list x y z)
      (fail "no solution")))

In other words, we are after an elegant way of implementing search backtracking: the possible values of the (x, y, z) triplets must be checked in turn, until a solution is found or we exhaust the search space. This is the kind of problem that declarative languages like Prolog solve in a breeze. Scheme does not fare badly, either: the right solution is just a screenful long. Keep on reading if you feel like getting some hints.

I've stolen this little exercise from an excellent talk by Marc Feeley on writing a Scheme to C compiler. The example pops up when Marc is discussing what makes such a compiler tricky (that is, what Scheme features are missing in C): tail calls, garbage collection, closures and first class continuations. So here goes the first hint: the solution uses call/cc.

Continuations 101

I couldn't put my finger on it, but there's something pesky about continuations: in my experience, they're hard to grasp at first. And yet, the concept is quite simple: whenever you are evaluating an expression E, there's someone waiting to do something with the value of E; for instance, in the process of evaluating:

(+ 3 (* 5 4))

if you take E to be (* 5 4), the interpreter is waiting to add 3 to your result… this thing to be done to the result of evaluating a (sub)expression can be, quite naturally, represented by a procedure that takes a single argument; in our example, this procedure could be:

(lambda (v) (+ 3 v))

or, if you are evaluating the whole thing on a REPL toplevel that will print the result, maybe something roughly equivalent to:

(lambda (v) (print (+ 3 v)))

It is this (usually invisible) procedure that we call the current continuation. Every expression has a continuation, but in many languages it is only implicit. Scheme gives you the possibility of accessing it. If you write:

(+ 3 (call/cc (lambda (cont) (* 5 4))))

that could be translated as

(+ 3
   (let ((cont (lambda (v) (+ 3 v))))
     (* 5 4)))

Of course, things get only interesting when you use cont; for instance, it is hopefully clear that

(+ 3 (call/cc (lambda (cont) (* 5 4) (cont 18))))

evaluates to 21. Besides writing silly examples and boring your readers to tears, you can put this stuff to good use in a variety of ways. The most immediate is to escape early from a long computation. This procedure multiplies all the elements in a given list:

(define (mult lon)
  (cond ((null? lon) 1)
        ((zero? (car lon)) 0)
        (else (* (car lon) (mult (cdr lon))))))

but it will do a lot of useless work when lon contains a 0. First class continuations allow a better way:

(define (mult-k lon cont)
  (cond ((null? lon) 1)
        ((zero? (car lon)) (cont 0))
        (else (* (car lon) (mult-k (cdr lon) cont)))))

(define (mult lon) (call/cc (lambda (cont) (mult-k lon cont))))

If you understand why this is better than the previous version, you're on your way to understanding continuations as non-local exits. And if by now you're thinking that continuations are not, after all, a big deal, quick, tell me what

(((call/cc (lambda (k) k)) (lambda (x) x)) "HEY!")

evaluates to, and why. By the way, this nice micro-kata is from TSPL's section on continuations, which provides a nice, if brief, introduction to call/cc, including a description of another typical use case: the implementation of threading via coroutines. A more extended (but still not too long) discussion of the very same issues can be found in Dan Friedman's beautiful paper Applications of Continuations.

Finally, if you have a little time in your hands, read through this interesting ll1 mail thread, where guys like Matthias Felleisen, Avi Bryant or Paul Graham have their say on continuations and their uses.


Since coroutines are not needed to solve our original kata, let me gloss over them and jump to the next typical use of continuations: backtracking. A key thing to remember about call/cc is that the continuation passed to the lambda form is a first class value. Among other things, that means that you can store it and use it in a totally unrelated context. Or, in other words, Scheme introduces time-travel in your toolset, with its associated wonders (as in Schelog, a Prolog implemented in Scheme) and headaches.

Let's see how backtracking can be implemented with the aid of persistent continuations using an example adapted from Jacques Chazarain's book Programmer avec Scheme (which, by the way, makes for an excellent second book on Scheme, provided you read French). Writing a procedure that looks for the first occurrence of an element in a list that satisfies a given predicate is a piece of cake:

(define (search lst p?)
  (cond ((null? lst) #f)
        ((pair? lst) (or (search (car lst) p?)
                         (search (cdr lst) p?)))
        ((p? lst) lst)
        (else #f)))

where we accept lists of lists too. But what if i want to get all findings one by one? I'd like to have a solution generator that returns a procedure returning, in successive calls, each of the solutions occurring in the given list. That is, we want to be able to write code like this one:

(define solutions
  (search-generator '(0 ((1 a) 2) b (b c) (((6)))) number?))
(solutions) => 0
(solutions) => 1
(solutions) => 2
(solutions) => 6
(solutions) => 'done

Persistent continuations allow a very clean implementation of search-generator. The strategy is to start the search and, each time we find an element in the list satisfying the predicate, store the current continuation (which will keep on searching for more solutions) for later invocations. We then return a procedure that calls the continuation and stores a new one which will resume the search with the rest of the list. You can have a little fun trying to find the solution before reading it below. Or, if you get stuck, read Ferguson and Deugo's excellent Call with Current Continuation Patterns, where you'll find examples of call/cc in action. Got your answer? Well, let's compare:

(define (search-generator lst p?)
  (let ((success '?)) ;; placeholder for the current continuation
    (letrec ((cont-success ;; continuation resuming the search
              (lambda (x) (search lst)))
             (search
              (lambda (elem)
                (cond ((null? elem) 'done)
                      ((pair? elem) (search (car elem))
                                    (search (cdr elem)))
                      ((p? elem) (call/cc
                                  (lambda (k) ;; next search will continue from here
                                    (set! cont-success k)
                                    (success elem))))
                      (else 'done)))))
      (lambda () (call/cc (lambda (k)
                            (set! success k)
                            (cont-success 'done)))))))

Once you grasp how this works, you have all the ingredients to solve our original kata.

Not only Scheme

There are other languages with support for first class continuations, SML/NJ and Ruby being the most salient cases. Besides, Seaside implements them for our beloved Smalltalk.

You don't need call/cc to play some of the tricks discussed above. For instance, one can implement backtracking in Python (the linked article contains also an intro to continuations) or Standard SML (ditto, also with examples in Pascal, and, of course, SML), using a technique known as Continuation Passing Style, or CPS, which consists in passing the continuation explicitly to each of your functions. Explaining CPS would take another article, so, for now, i will let you explore it by yourself, but i'll mention that, armed with CPS, you can play essentially all the call/cc tricks. A few years ago, type theorist extraordinaire Benjamin C. Pierce asked in the Caml mailing list about cool CPS usages, and took the time to summarize the responses he got. It contains pointers to many mind-bending readings.
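As a small taste of the style, here is a backtracking search written in CPS, sketched in Python rather than SML or Scheme (the function names are made up for the occasion): every function receives, besides its normal arguments, the continuations to invoke on success and on failure.

```python
def find_cps(lst, p, sk, fk):
    """Search lst for elements satisfying p, in continuation-passing
    style: sk(elem, retry) is the success continuation, fk() the
    failure continuation, called when no possibilities remain."""
    if not lst:
        return fk()
    head, tail = lst[0], lst[1:]
    if p(head):
        # Hand the caller a 'retry' continuation: invoking it
        # backtracks and resumes the search in the tail.
        return sk(head, lambda: find_cps(tail, p, sk, fk))
    return find_cps(tail, p, sk, fk)

# Collect every even number by backtracking after each success:
results = []
def on_success(x, retry):
    results.append(x)
    return retry()

find_cps([1, 4, 7, 10], lambda n: n % 2 == 0,
         on_success, lambda: 'done')
# results is now [4, 10]
```

The `retry` thunk is doing exactly the job of the stored continuation in search-generator above: it packages up "the rest of the search" as a value.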

The solution

I almost forgot: if you give up finding our kata's solution or want to check it out, you'll find it in Marc Feeley's talk for the Montreal Scheme/Lisp Users Group. As i've mentioned, it deals with the implementation of a Scheme to C compiler (Marc is the lead developer of Gambit-C), which is based on CPS. The site contains links to the talk slides and videos, and a beautiful pure-scheme implementation of the compiler. Enjoy.

Tags: programming
05 Feb 2006

beyond mainstream object-oriented programming


After a few scheming years, i had come to view objects as little more than poor man's closures. Rolling a simple (or not so simple) object system in Scheme is almost a textbook exercise. Once you've got statically scoped, first-class procedures, you don't need no built-in objects. That said, it is not that object-oriented programming is not useful; at least in my case, i often find myself implementing applications in terms of a collection of procedures acting on the requisite data structures. But, if we restrict ourselves to single-dispatch object-oriented languages, i saw little reason to use any of them instead of my beloved Scheme.

Things started to change recently, due to my discovery of the pleasures of Smalltalk. First and foremost, it offers a truly empowering integrated environment to live and develop in. Second, if you're going to use objects, using the simplest, cleanest syntax will not hurt. Add to that some reading on the beautiful design principles underlying Smalltalk, and one begins to wonder whether closures aren't, in fact, poor man's objects–or at least i do, whenever i fall into an object-oriented mood (i guess i'm not yet ready to reach satori).

But Scheme is not precisely an ugly or badly designed language, so i needed some other reason to switch language gears for my OO programming. I knew from my readings on CLOS (the Common Lisp Object System) and on Haskell's type classes (with their built-in parametric polymorphism) that there's more than encapsulation or subtype polymorphism in object-land, but i was after something retaining Smalltalk's elegance. And then i remembered that, when i was a regular lurker in the Tunes project's mailing lists and IRC channel, a couple of smart guys were implementing an OO language with a smalltalkish syntax. That language (which, if memory serves, started life with the fun name who me?) has evolved during the last few years into a quite usable programming environment named Slate, started by Lee Salzman and currently developed and maintained by Brian Rice.

I've been reading about Slate during the last few days, and decided to learn it. What motivated me was discovering how Slate goes beyond mainstream object-oriented programming by incorporating well-known (but hardly used) and really powerful paradigms. In short, Slate improves Smalltalk's single-dispatch model by introducing and combining two apparently incompatible technologies: multiple dispatch and prototype-based programming. To understand the whys and hows of Slate, there's hardly a better way than reading Lee Salzman's Prototypes with Multiple Dispatch. The following discussion is, basically, an elaboration of Lee's explanation of the limitations of mainstream OO languages, and of how to avoid them with the aid of PMD.

Note: the images are thumbnails of this PDF file, with clickable links inside.

Fishes and sharks

Let's start by showing why on earth you would need anything beyond Smalltalk's object system (or any of its modern copycats). Consider a simple oceanographic ecosystem analyser, which deals with (aquatic) Animals, Fishes and Sharks. These are excellent candidates for class definitions, related by inheritance. Moreover, we are after modelling those beasts' behaviours and, in particular, their reactions when they encounter each other: each time a Shark meets a Fish of another species, the Shark will swallow the other Fish, while when a Shark meets another Shark they will fight. As a result of such fights, Sharks get unhealthy, which regrettably complicates matters: wounded sharks won't try to eat other fishes, and will swim away from other sharks instead of fighting them. The image on the left provides a sketchy representation of the code we need to model our zoo. Waters are quickly getting muddled implementation-wise.


On the one hand, subtype polymorphism based just on the type of the object receiving the encounter message is not enough: we need, in addition, to take into account the argument's concrete type to implement the desired behaviour. This is a well-known issue in single-dispatch languages, whose cure is, of course, multiple dispatch (see below). In particular, we want to avoid the need to modify existing classes whenever our hierarchy is extended.

On the other hand, varying state (exemplified here by the Shark's isHealthy instance variable) complicates the implementation logic. As we will see, prototype-based languages offer a way to factor out this additional complexity.
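To make the muddle concrete, here is how the single-dispatch version might look, as a hypothetical Python sketch of the article's pseudo-code (class names follow the text; method bodies are my own guesses): each encounter method has to inspect both the argument's type and the receiver's health by hand.

```python
class Animal:
    pass

class Fish(Animal):
    def encounter(self, other):
        # Single dispatch: the argument's type is checked manually.
        if isinstance(other, Shark) and other.is_healthy:
            return 'swims away'
        return 'ignores'

class Shark(Fish):
    def __init__(self):
        self.is_healthy = True

    def encounter(self, other):
        if not self.is_healthy:
            return 'swims away'      # wounded sharks avoid everybody
        if isinstance(other, Shark):
            self.is_healthy = False  # state leaks into dispatch logic
            return 'fights'
        if isinstance(other, Fish):
            return 'swallows'
        return 'ignores'
```

Every new species means revisiting these isinstance chains: exactly the extensibility problem single dispatch creates.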

Beyond single-dispatch

The need to adjust behaviour on the basis of the type of both a message receiver and its arguments arises frequently in practice. So frequently, that a standard way of dealing with it has been christened the Visitor design pattern. The technique, also known as double dispatch, is well known: you can see, for instance, how it's applied to arithmetic expressions in Smalltalk, or read about a generic implementation of multimethods in Python (which also includes a basically language-independent discussion of the issues at hand). If you happen to be a C++ programmer, you may be tempted to think that global functions and overloading solve the problem in that language. Well, think twice: a proper implementation of multiple dispatch in C++ needs RTTI and templates, as shown in this article.
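For the record, here is a minimal double-dispatch (Visitor-style) rendering of the encounters, sketched in Python with names of my own invention: the receiver dispatches once, then bounces the message back so that the language dispatches a second time on the argument's concrete type.

```python
class Fish:
    def encounter(self, other):
        # First dispatch picked Fish.encounter; the call below
        # dispatches again, this time on other's concrete type.
        return other.meet_fish(self)

    def meet_fish(self, fish):
        return 'nothing happens'

    def meet_shark(self, shark):
        return 'shark swallows fish'

class Shark(Fish):
    def encounter(self, other):
        return other.meet_shark(self)

    def meet_fish(self, fish):
        return 'shark swallows fish'

    def meet_shark(self, shark):
        return 'sharks fight'
```

It works, but each new species must add a meet_… method to every existing class: the double dispatch is simulated by hand, not supported by the language.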


CLOS and Dylan are two examples of languages solving the issue from the outset by including support for multi-methods. The idea is to separate methods from classes (which only contain data slots). As shown in the pseudo-code of the accompanying figure, methods are defined as independent functions with the same name, but differing in their arguments' types (in CLOS, a set of such methods is called a generic function). When a generic function is called, the system selects the actual method to be invoked using the types of all the arguments used in the invocation. The encounter generic function in our running example provides a typical case, as shown in the figure on the right. The benefits of having multi-methods at our disposal are apparent: the code is simpler and, notably, adding new behaviours and classes to the system does not require modifying existing code. For instance, we can introduce a Piranha, which eats unhealthy sharks instead of swimming away from them, by defining the requisite class and methods, without any modification whatsoever to the already defined ones.
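The mechanism can be mimicked in a few lines of Python (a toy sketch, not CLOS: methods live in a registry keyed by argument types, and lookup walks both arguments' ancestor chains, with the leftmost argument taking precedence):

```python
class Animal: pass
class Fish(Animal): pass
class Shark(Fish): pass
class Piranha(Fish): pass

_methods = {}

def defmethod(*types):
    """Register a method of the 'encounter' generic function for the
    given tuple of argument types."""
    def register(fn):
        _methods[types] = fn
        return fn
    return register

def encounter(a, b):
    # Select the most specific applicable method, walking both
    # arguments' ancestor chains (leftmost argument wins ties).
    for ta in type(a).__mro__:
        for tb in type(b).__mro__:
            if (ta, tb) in _methods:
                return _methods[(ta, tb)](a, b)
    raise TypeError('no applicable method')

@defmethod(Shark, Fish)
def _(a, b): return 'shark swallows fish'

@defmethod(Shark, Shark)
def _(a, b): return 'sharks fight'

@defmethod(Fish, Fish)
def _(a, b): return 'nothing happens'

# Extending the system touches no existing code:
@defmethod(Piranha, Shark)
def _(a, b): return 'piranha eats shark'
```

Note how encounter(Shark(), Piranha()) falls back to the more general (Shark, Fish) method, while encounter(Piranha(), Shark()) picks the newly added one.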

On the downside, we have still to deal with the complications associated with internal state. Enter the magic world of prototype-based systems.

The ultimate dynamic

If you like dynamic languages, chances are you'll find prototype-based systems an almost perfect development environment. Prototype-based languages emerged as an evolution of Smalltalk with the invention of Self by David Ungar and Randall B. Smith during the late eighties. The key idea behind Self is the observation that, most of the time, class definitions needlessly coerce and complicate your object model.

A class definition becomes a contract to be satisfied by any instance, and it is all too easy to miss the future or particular needs of your objects (class-based inheritance is just a partial solution to this problem, as shown, for instance, by the so-called fragile base class problem). But, if you look around you, objects change in internal behaviour and data content continuously, and our attempts at distilling their Platonic nature are often in vain.

In prototype-based programming, instead of providing a plan for constructing objects, you simply clone existing instances and modify their behaviour by directly changing the new instance's slots (which provide uniform access to methods and state). New clones contain a pointer to their parent, from which they inherit non-modified slots: there is no way to access state other than via messages sent to instances, which simplifies dealing with state.
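The clone-and-delegate mechanism just described fits in a few lines. Here is an illustrative Python sketch (the Proto class and its slot names are mine, not Self's API):

```python
class Proto:
    """A minimal prototype object: slots uniformly hold data and
    methods, and lookups not answered locally are delegated to the
    parent."""
    def __init__(self, parent=None, **slots):
        self.parent = parent
        self.slots = dict(slots)

    def get(self, name):
        if name in self.slots:
            return self.slots[name]
        if self.parent is not None:
            return self.parent.get(name)  # delegate up the chain
        raise AttributeError(name)

    def set(self, name, value):
        self.slots[name] = value

    def clone(self, **slots):
        # A clone starts (almost) empty, inheriting every
        # non-overridden slot from its parent.
        return Proto(parent=self, **slots)

    def send(self, name, *args):
        # Methods are ordinary slots, called with the receiver first.
        return self.get(name)(self, *args)

fish = Proto(swim=lambda self: 'swims')
shark = fish.clone(healthy=True)
wounded = shark.clone(healthy=False)  # overrides a single slot
```

wounded.send('swim') still finds swim two parents up the delegation chain, and no class was ever declared.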

Class-based languages oblige you to keep two relationships in mind to characterize object instances: the "is-a" relationship of the object with its class, and the "kind-of" relationship of that class with its parent. In Self, inheritance (or behaviour delegation) is the only one needed. As you can see, Self is all about making working with objects as simple as possible. No wonder Ungar and Smith's seminal paper was titled Self: The Power of Simplicity. Needless to say, a must read.


The figure above shows how our running example would look in selfish pseudo-code. As promised, state is no longer surfacing in our method implementation's logic. Unfortunately, we have lost the benefits of multi-methods in the process. But fear not, for, as we will see, you can eat your cake and have it too. Instead of pseudo-code, you can use Self itself, provided you are the happy owner of a Mac or a Sun workstation. Or you can spend 20 fun minutes seeing the Self video, which features the graphical environment accompanying the system. Like Smalltalk, Self provides you with a computing environment where objects are created, by cloning, and interact with you. The system is as organic and incremental as one can possibly get.

Of course, you're not limited to Self. For instance, Ken Dickey fleshed out Norman Adams' saying that objects are a poor man's closures by offering a prototype-based object system in Scheme, and, more recently, Neil Van Dyke has released Protobj. And you have probably already used a very popular language in the family: Javascript. The list goes on, although, unfortunately, many of these languages lack either Self's nice integrated environment, or a portable, up-to-date implementation. Slate to the rescue.

The best of both worlds

Prototyping and multiple dispatch are, at first sight, at odds. After all, method dispatching based on arguments' type needs, well, a type for each argument, doesn't it? As it happens, Lee Salzman and Brian Rice have envisioned a way of combining the power of both paradigms into Slate. In fact, proving how this is possible is the crux of Lee's article. In addition, Slate aims at providing a complete development environment in the vein of Smalltalk or Self. Too good to be true? In future installments of this blog category, we'll see how and why it's true, but, if you cannot wait, just run-not-walk to Slate's site. You'll have a great time.

Tags: programming auld
19 Jan 2006

as simple as possible...

… but not simpler.

Einstein's (attributed) quotation has become an aphorism, taken for granted by every mathematician or physicist i've ever met (to mention two kinds of people i've been frequently involved with). One would expect the same attitude from a community that invented the term 'no silver bullet', and yet, since i got into computer science, first for fun and later on for a living, i've found lots of people with, er, a different viewpoint. Take for instance this excerpt from Lisp is sin, a widely cited and commented article by Sriram Krishnan:

In Visual Studio, we look at 3 distinct categories of programmers. We call them Mort, Elvis and Einstein - the 3 personas we take into consideration whenever we create any developer technology. What is difficult for some of us geeks to comprehend sometimes is - all 3 of them are equally important. When you look at technology like Windows Workflow Foundation, one of the driving forces was to let non-geeks build software. Not everyone needs to be a Raymond Chen or a Dave Cutler. Not everyone needs to understand the difference between the various GC algorithms. However, everyone needs the ability to be productive. And everyone needs to be able to get software working without needing a CS degree.

We cannot afford to restrict software development only to those who know Windows kernel internals or those who can understand what a continuation is. It's not that other people are not smart - they just have better things to do. That's the key piece of understanding I find missing sometimes.

Nonsense, if you ask me. And yet, i've been hearing this same argument, in different dressings, once and again since i got into computer science. Let's apply the same line of reasoning to other disciplines, and see how well it fares:

Hey Albert, your General Relativity is awesome but, you know, with all that jazz about differential geometry and curved spacetimes, it's too hard; we're not as smart as you, pal, so we'd better use Newtonian or Aristotelian mechanics to calculate those GPS satellite orbits and get going with other important things we need to do. Hope you understand, Albert.

Well Santiago, your ideas about neurons and surgery sound pretty deep and mystifying, but please, think of the Galens among us: we don't have the time to investigate every new fad, and, anyway, we wouldn't understand it if we did. Know what? We'll keep using our old good cures and stay away from untrodden venues. Our healing parchments are a bit of a hack, but they get the work done… most of the time, that is.

Does it make any sense? Now, maybe you think that i am exaggerating, and that the comparisons above are stretching the point a bit too far. If so, take a second to look back to the people that made your nice computing environment possible. Take a look at Charles Babbage's visions; read about Alan Turing and Alonzo Church or John von Neumann; admire the elegance of McCarthy's original LISP (1960); prepare to be surprised with the things the people in Doug Engelbart's Augmentation Research Center were doing during the sixties; try to find a modern drawing program that matches Sketchpad's algorithms (or see it in action in this presentation by Alan Kay); follow the fascinating development of the overlapping windows interface, hand in hand with Smalltalk history back at Xerox PARC, and do it from the horse's mouth; feel the thrill of the people that went beyond Xerox's big wigs' shortsightedness and on to making a dent in the universe: it was 1984, that same year the lisp machine wars culminated in the creation of the GNU project, which was all about ideals, about empowering people, about freedom. When you're done, tell me whether i'm going overboard when making parallelisms between computer science and physics or medicine!

All those people had a vision, a dream, and pursued it with an amazing display of hard work, stubbornness and intelligence. They took no prisoners, and by the late eighties had pushed that new thing called, for want of a better name, Computer Science to its modern standards.

Then winter came. Not just the AI winter. Compare the swift pace of CS developments during the 1960-80 period with the subsequent advancements in the field. We're using the same metaphors, the same kind of systems that we inherited from those guys and gals. Why, we even envy the power of Lisp Machines these days. It's been a long, cold winter for CS. And the main reason was the appearance of the mentality that i'm criticising in this post, what Alan Kay aptly calls, in a recent interview, a pop culture of computers:

Perhaps it was commercialization in the 1980s that killed off the next expected new thing […] But a variety of different things conspired together, and that next generation actually didn't show up. One could actually argue—as I sometimes do—that the success of commercial personal computing and operating systems has actually led to a considerable retrogression in many, many respects. You could think of it as putting a low-pass filter on some of the good ideas from the '60s and '70s, as computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were. So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture.

Dead on, i say. People advocating making programming simpler than possible are the hallmark of this pop culture. And when corporate and economic interests enter the picture, things get even worse. The Lisp is sin essay goes on to say:

I frequently see on Slashdot "Windows is designed for stupid users". That is quite insulting to the millions of moms and dads, teachers and laywers and people from other walks of life who use Windows or even the Mac. If we mandated that every new user understand Windows' command line syntax or Emacs, we would have failed as an industry - we would have locked out the rest of the world.

In my opinion, this totally misses the point. There's nothing wrong in making computers simpler for users. On the contrary, that's probably what this endeavour is all about. Alan Kay saw it, Apple took heed with its computer for the rest of us mantra. But it does not follow that there must be a CS for the rest of us. Making all this amazing technology possible takes effort, and needs a high level of sophistication. Alan didn't try to create systems usable by children by inventing PHP. He created Smalltalk striving to improve Lisp, he studied Piaget and Papert, he has degrees in maths and biology. And he needed all that, and then more.

The (trivial) point i'm trying to make is that not everybody has what it takes to be a programmer. Just as not everybody can be a singer or a painter (as an aside, i tend to agree with the opinions that link programming and art). As a matter of fact, good programmers are rare and need a quite peculiar combination of skills and talents. Donald Knuth has put it far better than i could in the essay Theory and Practice, II (from his Selected Papers on Computer Science):

The most important lesson [after developing TeX], for me, was that software is hard; and it takes a long time. From now on I shall have significantly greater respect for every successful software tool that I encounter.[…] Software creation not only takes time, it's also much more difficult than I thought it would be. Why is this so? I think the main reason is that a longer attention span is needed when working on a large computer program than when doing other intellectual tasks. A great deal of technical information must be kept in one's head, all at once, in high-speed random-access memory somewhere in the brain.

We don't solve the painter's problem by complaining that perspective is hard to grasp and people should better use flat icons. In the same way, we shouldn't be calling for a trivialisation of CS, whether in academia or in the industry. The we would have failed as an industry bit in the Sriram quote above is really sad: we're sacrificing an admirable legacy in the name of industry and corporate profit. The most remarkable feat of our current industry leaders is to have convinced the rest of the world that having software systems that eat incredible amounts of resources and explode without reason every now and then is part of an acceptable, even top-notch, technology. Fortunately, other disciplines show far more respect for the people that, ultimately, are paying their wages.

If you've got this far, you already have one of the qualities needed to become a programmer: stamina. You'll need more. Be prepared to study hard, to learn maths, to live in abstract worlds. If you feel that you have "more important things to do", well, that's all very well, but don't ask the rest of us to dumb down the subject so that everybody can be a programmer. Lisp is not a sin. The sin would be to betray the dreams, ideals and hard work of the people that have taken us this far. We owe that to them, and to ourselves.

To end this never-ending diatribe, let me add a couple of things: first, i should apologize for taking Sriram as the scapegoat to a long honed rage: his essay contains many good points worth reading and mulling on; second, i hope you're not thinking this is just an arrogant rant by an old fart: i'm not that old.

Tags: programming auld
14 Jan 2006

the joy of repl

Back in the old days i was a macho C++ programmer, one of those sneering at Java or any other language but C, willing to manage my memory and pointers and mystified by the complexity of the template syntax (it was difficult and cumbersome, ergo it had to be good). Everyone has a past.

Things began to change when i decided to add Guile extensibility to GNU MDK. I was using the project as an excuse to learn everything one has to learn to write free software, from parsers with flex and bison, to documentation generators like texinfo and doxygen or localisation via gettext. Next thing was scriptability, and those days Scheme was still the way to extend your GNU programs (last time i checked, GNOME was full of XML, a windows-like registry and, oh my, C#… no Scheme (or good taste) to be seen).

So, when i first encountered Scheme i was high on static type checking, object-oriented programming in its narrow C++ flavour, and all that jazz. I didn't understand immediately what was so cool about having an interpreter, and defining functions without the compiler checking their signature at every single call made me feel uneasy. I was told that i still had strong type checking in Lisp, only deferred to run time instead of happening at the apparently safer compile phase. I didn't get it. Thank god, SICP was so much fun that i kept on learning, and i kept wondering for a while what was so great about interpreters and dynamic typing.

Problem was, i was writing C programs in Scheme. In a compiled language (a la C) and, to some degree, in any statically typed one, your code is dead. You write pages and pages of inert code. You compile it. Still dead. Only when you launch that binary does it come to life, but it lives elsewhere, beyond your reach. Admittedly, i'm exaggerating: you can reach it in a convoluted way via a debugger. But still. A debugger is an awkward beast, and it will only work with the whole lot: all your program compiled, linked and whatnot.

Enter a dynamic language. Enter its REPL. When you have a, say, Lisp interpreter at your disposal you don't write your code first and load it later (that's what i was doing at first). You enter your code piecewise, function by function, variable by variable, at that innocent-looking prompt. You develop incrementally, and, at every single moment, your objects and functions are alive: you can access them, inspect them, modify them. Your code becomes an organic creature, plastic. It's almost not programming, but experimenting.

Maybe you're raising a skeptical eyebrow. Maybe you have one of those modern visual-something debuggers that let you modify your compiled code on the fly and continue running it using the new definitions, and you think that's what i'm talking about… Well, no, sorry, that's only part of what i'm talking about. To begin with, you are still tied to executing your program as a whole, while at the REPL i can evaluate whatever expression i want. But that's not all. We are talking about a dynamically typed language. That means that me and my little REPL have much more leeway to modify the living code, and thus much more room to grow and evolve it.

At the end of the day, dynamically typed languages give me freedom. Programming is a creative process and greatly benefits from that freedom. At first, abandoning the safety net provided by static typing was a little bit scary, but as i grew up as a programmer i felt more and more confident, and gradually the initially uneasy feeling morphed into joy. The joy of REPL.

Richard P. Gabriel has done a far better job of beautifully conveying what i'm trying to express in his excellent introduction to David Lamkins' book Successful Lisp, entitled The Art of Lisp and Writing. Unfortunately, i haven't found it online – you can read its first few pages in Amazon's "look inside this book" section for the book. He makes related points in his essay Do Programmers Need Seat Belts?. Paul Graham has famously argued in favour of bottom-up development in many of his essays, and especially in Programming Bottom-Up:

It's worth emphasizing that bottom-up design doesn't mean just writing the same program in a different order. When you work bottom-up, you usually end up with a different program. Instead of a single, monolithic program, you will get a larger language with more abstract operators, and a smaller program written in it. Instead of a lintel, you'll get an arch.

Finally, please note that i'm well aware that the static vs. dynamic typing debate is still open, and that decent type systems like those in Haskell and ML have, arguably, much to offer on the way to solid software engineering. Type theory also has a powerful and beautiful mathematical foundation. The above is just my gut feeling and current position on these issues, and i don't pretend to have backed it with solid technical argumentation. Nor was it my goal. I'm more interested here in programming as a creative activity than as engineering.

Tags: programming auld
Creative Commons License by jao is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.