opening it up with Common Lisp


RDF Triples in XML
Sunday, August 20, 2006

(After all, where else would you put them?)

Even after everything is in RDF, you still need to find someplace to put it and a way to write it down. Big disks answer the first question, but the second has turned out to be surprisingly hard. Since XML has become the one (markup language) to rule them all, it was no surprise that people turned to it for the answers. Unfortunately, dealing with everything has a way of pulling in competing constituencies, and those pesky subgroups tend to pull things into a muddle. This left RDF/XML in several diverse forms, each somewhat successful but each with its own problems. This paper presents another way to look at the problem that, as far as I know, has gone on to become quite successful.


IT Conversations
Sunday, August 20, 2006

  • Tara Lemmey talks about US security in the age of "terror". If these ideas are actually implemented, then things might become better. My guess, however, is that we'll just have more technology-without-thought scares.
  • Ray Lane talks about software and stuff, stream-of-consciousness style, at Software 2006. His stream is moderately interesting.
  • Elias Torres talks about the Semantic Web (RDF, OIL, SPARQL, oh my!) with Phil Windley. Torres does a good job explaining why we care, though I can't help but hear shades of the "AI will solve everything" promises of the early '80s whenever I hear about the Semantic Web.

more "real" C++ macros via template metaprogramming
Saturday, August 19, 2006

If it didn't end up being so ugly, it would be cute.

Although this technique might seem like just a cute C++ trick, it becomes powerful when combined with normal C++ code. In this hybrid approach, source code contains two programs: the normal C++ run-time program, and a template metaprogram which runs at compile time. Template metaprograms can generate useful code when interpreted by the compiler, such as a massively inlined algorithm -- that is, an implementation of an algorithm which works for a specific input size, and has its loops unrolled. This results in large speed increases for many applications.

This is from a longer article by Todd Veldhuizen referenced by Scott Meyers (it's on the web somewhere but I seem to have misplaced it).

Someone should create a language that lets you do this without having to jump through so many hoops. It could work by taking source code as input and writing out new source code, with the whole power of the language behind the transformations. A language like that would blow C++ out of the water in terms of popularity. What's that? Oh, sorry. I went off my meds again.
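
Lisp, of course, is exactly such a language. As a minimal sketch (the macro name is my own invention), here's a compile-time loop unroller in a dozen lines of Common Lisp:

(defmacro dotimes-unrolled ((var count) &body body)
  ;; COUNT must be a literal integer: the expansion repeats BODY
  ;; COUNT times with VAR bound to successive indexes, so no loop
  ;; is left at run time.
  (check-type count (integer 0))
  `(progn
     ,@(loop for i below count
             collect `(let ((,var ,i)) ,@body))))

Here, (dotimes-unrolled (i 4) (print i)) expands into four inlined copies of the body -- the same massively inlined effect as the template metaprogram, without the hoops.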


SPAM on the lam? Not for me!
Saturday, August 19, 2006

It used to be that most of my SPAM went into junk. Lately, however, a great deal of stuff that looks as if it should be easy to categorize is ending up in my inbox. Grrrr.

I'm guessing that I should try retraining my SPAM filter (in OS X's Apple Mail) or switch to something like Michael Tsai's SpamSieve. I used SpamSieve once before and liked it -- but why pay when Apple Mail seemed to be doing as good (or at least nearly as good) a job? Is anyone else experiencing this?


Feedback: good and bad
Friday, August 18, 2006

It's ironic: I received both high praise and an indictment for my Lisp software on the same day! Both, however, were deserved. Well, I know I deserve the indictment and I'm pretty sure that the praise is justified too. First, I heard that CL-Markdown's new extensions mechanism had made someone's day:

I was able to complete a project today, just in time, and leave for seven days of vacation tomorrow. This wouldn't have been possible in that form without markdown extensions...

That cheered me up. Later in the day, however, I heard that:

... most of the web pages for your various lisp packages need to be updated. You don't list nearly all of the required dependencies. ... I'm so annoyed from having to go through the discovery process that I'll let you discover which pages are inadequate.

Which is quite true. I suppose it would be sort of OK if I didn't list any dependencies, but listing only some of them really opens the door to frustration. Besides, it's easy to pull all of this stuff and include it in the web pages automatically (a sketch follows the list below).

Things to do:

  • Switch from lml2 to CL-Markdown for my web site
  • Add CL-Markdown extensions to include the dependencies automatically
  • Improve the Enterprise Lisp system pages so that they list this stuff automatically too
  • Keep on keeping on.
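
The pulling part really is easy. A minimal sketch with the ASDF of the era (the function name is mine, and the exact shape of the return value varies between ASDF versions):

(defun direct-dependencies (system-name)
  ;; Ask ASDF what loading the system requires; the result is a list
  ;; of (operation . required-systems) entries that a page generator
  ;; can format into a dependency list.
  (asdf:component-depends-on (make-instance 'asdf:load-op)
                             (asdf:find-system system-name)))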

Thanks for both the kind words and the less kind ones. I want Lisp to succeed and that's only going to happen if the barriers to entry become ever lower.


RIFE
Friday, August 18, 2006

I just heard about RIFE, which sounds like Rails only in Java, etc... The nice thing is that Lispers could start working on LIFE, which has gotta be good, right? The linked interview has an interesting comment differentiating Rails from RIFE:

Some people criticize RIFE for not being like Ruby on Rails, where you can just start coding and have something up and running.

...

Why? Because we've been bitten so many times by the maintainability issue. We think that declaring certain things beforehand makes your application easier to maintain—for instance, it forces you to think of how state is being managed in your application.


Update: CL-Markdown again
Monday, August 14, 2006

Thanks to Frank Schorr for noticing that CL-Markdown sometimes failed to produce output. (Actually, if it failed once, it never stopped failing; but when it succeeded, it never failed.) It was a file compilation order dependency problem that didn't occur in my usual development environment. I'd been thinking that having a system compile an ASDF system in every order allowed by the dependencies would be a good way to catch certain bugs... it would have caught this one (assuming that a lot of additional infrastructure was in place <smile>).

When, oh when, will someone kick me in the butt so that I can get back to Enterprise Lisp? New files are up on Common-Lisp.net.


the Human Condition
Sunday, August 13, 2006

I like conditions. I like them a lot. I like to use them to describe bad program states instead of strings. (e.g., (assert (ok-p state) nil 'it-is-not-ok-condition :state state) instead of (assert (ok-p state) nil "It is not ok")). I like them because you can easily test whether or not they happened programmatically and because using them helps to centralize error messages -- and thus gives me hope that I can make them consistent. There is a catch though: define-condition is a prolix form. Thus Metatilities has long had defcondition to make writing them a bit easier and yesterday I wrote a sibling macro (with the same name) to handle my most common case. The macro is simplicity itself:

(defmacro defcondition (name (&rest super-conditions) 
			slot-specs format &body args)
  (flet ((massage-slot (slot-spec)
	   (cond ((atom slot-spec)
		  `(,slot-spec 
		    :initarg ,(read-from-string (format nil ":~a" slot-spec))))
		 (t
		  slot-spec)))
	 (massage-format-arg (arg)
	   (cond ((atom arg)
		  `(slot-value condition ',arg)) 
		 (t
		  arg))))
    `(progn
       (export '(,name))
       (define-condition ,name ,super-conditions
	 ,(mapcar #'massage-slot slot-specs)
	 (:report (lambda (condition stream)
		    (format stream ,format 
			    ,@(mapcar #'massage-format-arg args))))))))
  

and this lets us write

(define-condition record-number-too-large-error
    (invalid-record-number-error)
  ((record-count :initarg :record-count))
  (:report 
   (lambda (condition stream)
     (format stream 
	     "Record number ~a is too large for this store. Store size is ~a."
	     (slot-value condition 'record-number)
	     (slot-value condition 'record-count)))))

as

(defcondition record-number-too-large-error
    (invalid-record-number-error)
    (record-count)
    "Record number ~a is too large for this store. Store size is ~a."
  record-number record-count)

(the record-number slot is inherited). It's not much, but it's enough to make writing simple conditions like this almost as easy as writing an error message as a string. Programming environments are (in part) about reducing impedance and about making it easy to do the right thing.
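
For completeness, a usage sketch (the numbers are made up, and I'm assuming the inherited slot uses the obvious :record-number initarg):

(error 'record-number-too-large-error
       :record-number 12 :record-count 10)
;; => Record number 12 is too large for this store. Store size is 10.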


Book review: Darwinia
Friday, August 11, 2006

I very much enjoyed Robert Charles Wilson's Spin and I was looking forward to reading Darwinia. I'm glad I read them in the order I did because I thought Spin was superb and Darwinia both trite and silly. The characters are well drawn and relationships matter, but the central premise -- that earth (and the rest of the universe) is essentially a library book come to life, rewriting itself under the attack of computer viruses on steroids; that everything we know is caught up in a vast cosmological battle between sentience and these viruses... well, it's an interesting idea.

Darwinia feels as if it could have been great but it isn't. It feels tacked together from several weaker ideas and grand notions and never quite evolves beyond itself. Oh well, you can't always write a masterpiece.


Update: CL-Markdown
Tuesday, August 8, 2006

I've corrected another few bugs in CL-Markdown and improved the extensions a bit.

  • Fixed error in the processing of multiple bracketed things (e.g., [a][b] and [c][d]).
  • Improved table of contents processing.

It actually works pretty well (I suppose I shouldn't be surprised! Smile)


:explain that :box, will you?
Tuesday, August 8, 2006

One of Common Lisp's darker arts is optimizing functions. It's a beautiful idea: don't worry (too much) about performance until you've found the critical sections of your code; then tell the compiler what you'd like and, voila, faster code. Since compilers differ, however, and since you're never quite sure what incantation is going to convince Lisp to do "the right thing" (*), you're often left flipping declarations around like hot potatoes.

Enter Allegro Common Lisp, version 8.0. I've been working lately (for money even!) in ACL and one of the wonderful new features is that you can ask the compiler to tell you what it thinks and what it wants. I've wanted this ability for a long time (I even talked about it at ILC 200? in NYC). Let the programming environment notice that such and such a function is suboptimal and tell me that if I scratch its back (with a little more information), then it can scratch mine (with a little more performance). ACL's explain feature isn't quite that, but it's a great step in that direction.

(*) I.e., what you want it to. Joke.
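
Here's a sketch of what using the feature looks like (Allegro-specific; check the ACL documentation for the full set of :explain options):

(defun sum-of-squares (xs)
  ;; Ask the compiler to report on type propagation and boxing
  ;; while it compiles this function.
  (declare (optimize speed)
           (:explain :types :boxing))
  (loop for x of-type double-float across xs
        sum (* x x) of-type double-float))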


Cellular signposts
Tuesday, August 8, 2006

I suffer from psoriasis (and it's not just a heartbreak!). One of the many interesting questions about this disease is why this little bit of skin is undergoing massive immune-mediated wonkiness (MIMW) while the bit of skin next to it thinks that everything is normal. This article (which I found via the American Scientist) gives part of an explanation. Cool.


Perception, reality, etc.
Tuesday, August 8, 2006

Java, Python, and Ruby guru Mark Watson points out that he feels more productive using LaTeX rather than Word, and this reminds me of the whole "is incremental spell checking really a good thing" debate (which I've had with myself annually since 2003; don't be too surprised if you haven't heard about it; my ex-agent failed to get the big network contracts he promised. Which reminds me, isn't stream of consciousness interesting? I suppose it depends on the consciousness... <smile>).

At question is what work style and tools help humans find maximum productivity and creativity. Is it better for a writer to work with pencil and paper -- forced to produce marks on paper at a rate slower than thought -- or with a word processor that highlights spelling, grammar, and logical inconsistencies (I made that last one up) on the fly? Is it better for a programmer to submit batch jobs to a mainframe or have background processes constantly checking the source for problems? The truth is probably a muddy thing. My opinion, however, is that what's best (in terms of productivity, creativity, and flow) is highly context dependent and is often at odds with what feels psychologically most productive.

We need time to think; we need (metaphorical) quiet to get into the flow. A tool that shouts "spelling error" with every typo is providing more distraction than help. Yet it feels good to make those red squiggles disappear -- "We're making progress!" -- it feels good to play with fonts and shift the format. A computer is a far different beast than Heidegger's hammer: it can be at hand in many ways simultaneously. It's up to us to ensure that the tools we use are optimizing the important tasks, not the trivial ones.


Summer reading: Spin
Saturday, August 5, 2006

I'm sure many of my readers are saying to themselves (quietly, so as not to attract attention) "It's summer, why isn't Gary reading?" Well, say no more, squire, say no more. I have been reading; I've just been keeping quiet about it.

Spin is a wonderful book by the novelist Robert Charles Wilson. Normally, I worry about people with multiple first names -- it seems to me that it must leave them confused on some deep, deep level. Of course, I used to think that the group Tony Orlando and Dawn was composed of three people named "Tony," "Orlando," and "Dawn"... it made sense to me. Enough about my neuroses, however; let's talk about the book.

Spin is science fiction in the best sense -- interesting science (yes) and interesting fiction (of course) -- but the heart of the book is the characters, not the plot nor the gizmos. The "spin" is an event caused by forces unknown for reasons unknown that leaves the earth wrapped in a shell. The stars and moon are no longer visible and satellite communication is impossible. The sun, however, still appears to rise and give warmth. Life goes on. Eventually, it's discovered that time on earth is moving more slowly than that of the universe outside the shell; much more slowly. Weeks on earth correspond to millennia. This means that the sun will consume its supply of hydrogen fuel, enter senescence, and grow to consume the earth in less than a century. But why wrap the earth at all? Who is doing this? Is it the end of the world? Is it the rapture?

The book tracks the responses of three friends and the rest of the world to these questions. Wilson writes elegantly and eloquently; beautiful and at times heart-rending phrases enliven the interplay of plot and character. In the end, science matters less than people, and the hardest gap to cross is not the one between the stars but the one between self and other and, sometimes even less permeable, that between self and self.

Highly recommended for enjoyable science, wonderful writing, and a beautiful story.


Functional programming fun from Joel
Tuesday, August 1, 2006

Joel Spolsky writes well. He writes like a fish swims. Though not as glib as Paul Graham, he still leaves an earnest yet unctuous film around the eyeballs and a bit of a grin around the mouth. In any case, he provides a great lead-in for why MapReduce is way cool.


CL-Markdown extensions - first steps
Tuesday, August 1, 2006

I've made a little time to push on the CL-Markdown extension mechanism I wrote about a week or two ago. I've added a few new glitches, but the change was less painful than I thought it might be (CL-Markdown swings far towards the organic and messy side of the code I write!).

In any case, the text below (written in Markdown format and parsed by CL-Markdown) describes how to write and use CL-Markdown extensions. The output isn't perfect (there are some glitches with quotation marks in code blocks, for one thing), but it's pretty darn good if you ask me... (I know: biased, biased, biased... just like that SCLM). Let me know what you think!

CL-Markdown extensions

CL-Markdown aims to mirror the syntax of John Gruber's Markdown language (and it's getting there slowly!).

the Syntax

CL-Markdown uses { and } as new syntax markers. A single pair of curly braces wraps a function call whereas a double pair denotes a sort of wiki-like link. A function call looks like:

\{function-name [function-argument]*\}

A wiki-like link looks like:

\{\{ syntax as yet to be determined \}\}

Function calls: { and }

Calling extension functions requires three things:

  1. writing (or finding) the extension that you want
  2. telling CL-Markdown that you want to use the extension
  3. writing your Markdown text with calls to the extension

The last part is the easiest; all you need to do is open a curly brace, type the name of the extension function, type in the arguments (separated by spaces), and type a closing curly brace. For example:

"{now}" will generate the text "12:23".

The second step is necessary because CL-Markdown won't recognize functions as functions unless you tell it to up front. After all, you wouldn't want to allow people to execute arbitrary code; it might be a security risk (smile). Because CL-Markdown operates in two stages, there are two times when functions can be called: during parsing and during rendering. Functions active during these stages are kept in the special variables *render-active-functions* and *parse-active-functions*.

An example might make this clearer. First, we'll call Markdown without specifying any functions:

? (markdown "Today is {today}. It is {now}." 
  :format :html :stream t)
<P>
Today is 
; Warning: Inactive or undefined CL-Markdown function TODAY
; While executing: #
<STANDARD-METHOD RENDER-SPAN-TO-HTML ((EQL EVAL) T)>
. It is 
; Warning: Inactive or undefined CL-Markdown function NOW
; While executing: #
<STANDARD-METHOD RENDER-SPAN-TO-HTML ((EQL EVAL) T)>
. 
</P>

As you can see, the functions weren't ones that CL-Markdown was ready to recognize, so we got warnings and no text was generated. If we tell CL-Markdown that today and now should be treated as functions, then we see a far prettier picture:

? (let ((*render-active-functions* 
         (append '(today now) *render-active-functions*)))
    (markdown "Today is {today}. It is {now}." 
        :format :html :stream t))
<P>
Today is 1 August 2006. It is 11:36. 
</P>

By now, we've seen how to include function calls in CL-Markdown documents and how to generate those documents with CL-Markdown. The final piece of the puzzle is actually writing the extensions.

Writing CL-Markdown extensions

There are several ways to write CL-Markdown extensions. The easiest one is to write functions, active during rendering, that return the text that you wish to be included in the document. For example:

(defun today (phase arguments result)
  (declare (ignore phase arguments result))
  (format-date "%e %B %Y" (get-universal-time)))

The format-date command is part of metatilities; it returns a string of the date using the C-library-inspired date format. This string is placed in the document wherever the function call ({today}) is found.

Alternately, one can use the *output-stream* variable to insert more complicated text. This would look like:

(defun now (phase arguments result)
  (declare (ignore phase arguments result))
  (format *output-stream* "~a" 
    (format-date "%H:%M" (get-universal-time)))
  nil)

(Note that now returns nil so that the date isn't inserted twice!).

The other alternative is to use your function calls to alter the structure of the CL-Markdown document and then let Markdown deal with some or all of the rest. The anchor extension provides an example of this:

(defun anchor (phase &rest args)
  (ecase phase
    (:parse
     (let ((name (caar args))
           (title (cadar args)))
       (setf (item-at (link-info *current-document*) name)
             (make-instance 'link-info
               :id name :url (format nil "#~a" name) 
               :title (or title "")))))
    (:render (let ((name (caar args)))
               (format nil "
<a name='~a' id='~a'>
</a>
"
                       name name)))))

Anchor makes it easier to insert anchors into your document and to link to those anchors from elsewhere. It is active during both parsing and rendering. During the parsing phase, it uses its arguments to determine the name and title of the link and places this into the current document's link information table. During rendering, it outputs the HTML needed to mark the link.

An even more complex example is the table-of-contents extension:

(defun table-of-contents (phase &rest args)
  (bind ((arg1 (ignore-errors
                (read-from-string (string-upcase 
                                   (first args)))))
         (arg2 (ignore-errors
                (parse-integer (second args))))
         (depth (and arg1 (eq arg1 :depth) arg2)))
    (ecase phase 
      (:parse
       (push (lambda (document)
               (add-anchors document :depth depth))
             (item-at-1 (properties *current-document*)
                        :cleanup-functions))
       nil) 
      (:render
       (bind ((headers (collect-elements
                        (chunks *current-document*)
                        :filter
                        (lambda (x) (header-p x :depth depth)))))
         (when headers
           (format *output-stream*
                   "
<div class='table-of-contents'>
")
           (iterate-elements
            headers
            (lambda (header)
              (bind (((index level text)
                      (item-at-1 (properties header) :anchor)))
                (format *output-stream* "
<a href='#~a' title='~a'>
"
                        (make-ref index level text)
                        (or text ""))
                (render-to-html header)
                (format *output-stream* "
</a>
"))))
           (format *output-stream* "
</div>
")))))))

Because we can't generate a table of contents until the entire document has been parsed, the table-of-contents extension adds a function to the cleanup-functions of the current document. Cleanup functions are called when parsing is complete. The add-anchors function adds additional chunks to the document before each header (down to some fixed depth). These anchors can then be used by the rendering phase of the table-of-contents extension to link the headers to the sections in the document.


Update: ASDF-Binary-Locations
Tuesday, August 1, 2006

ASDF-Binary-Locations now has another way to customize output locations thanks to Erik Enge. Erik's patch recognizes that sometimes you want to specify output location based on who is currently using the machine. Thus we have the new variable: *include-per-user-information*.

When *centralize-lisp-binaries* is true, this variable controls whether or not to customize the output directory based on the current user. It can be nil, t, or a string. If it is nil (the default), then no additional information will be added to the output directory. If it is t, then the user's name (as taken from the return value of #'user-homedir-pathname) will be included in the centralized path (just before the lisp-implementation directory). Finally, if *include-per-user-information* is a string, then this string will be included in the output-directory.
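
For example, a preferences sketch (the resulting layout is approximate):

;; centralize fasls and add a per-user component to the path
(setf *centralize-lisp-binaries* t)
(setf *include-per-user-information* t) ; or a string like "gwking"
;; binaries then land somewhere like
;; <central-directory>/gwking/<lisp-implementation>/...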

Enjoy.


Links to things I thought were cool
Sunday, July 30, 2006

After all, isn't that what the web is all about?

  • This could be the start of something really great: "In simplest terms, a way to fund high-quality, original reporting, in any medium, through donations to a non-profit called NewAssignment.Net."

  • I'm not sure if this is a good use, but it's better than nothing... "One of [Alex Dragulescu's] more notable projects involved creating what he calls Spam Plants. He wrote algorithms that analyzed various text and data points of junk e-mail to produce "organic" images of plantlike structures that spontaneously grew based on incoming spam."

  • Search is almost always interesting: here is a roundup of several "new" ideas.

Some CL-Markdown design questions for public review
Tuesday, July 18, 2006

I've been using Peter Seibel's Markup (see also here for a fork by Cyrus Harmon) lately for a project. If you've seen LaTeX, then Markup will be familiar. The nicest thing about it is that Peter has made it really easy to extend the language with your own commands. It's also very well integrated with Lisp and with Peter's other tools. On the other hand, it's not Markdown. This means that Markup documents tend to look like documents with markup and that there is no convenient round trip from Markup to HTML and back again. Now, I'd like to be able to say that CL-Markdown provides Markdown's extremely readable syntax with Markup's cool customization, but I can't. At least, not yet. Here, then, are my thoughts on the matter. Please let me know if anything resonates positively or, perhaps even more importantly, negatively with you.

First, what do I want to support?

  • I'd like an easy to way to associate properties with a document and to use those properties later,
  • I'd like to add some Wiki-link syntax to make using Markdown in a Wiki a bit more natural,
  • I'd like to add simple customization so that one can define commands in Lisp and invoke those commands at well known points during document parsing and production, and
  • I'd like the syntax to seem natural so that it doesn't get too much in the way of normal writing and reading.

Second, what do I propose:

  • I'm intending to make #\{ and #\} special characters (you can always escape them if you need to use them for more pedestrian purposes). Stuff between a pair of curly braces will be treated as a Lisp function invocation where the first word denotes the function and the rest of the words denotes arguments. Here are some examples.

    • {table-of-contents} - add a table of contents (based on H1, H2, and so forth headings). If you only want to go down to H3, then use {table-of-contents :depth 3}.

    • {set-property document-name "general"} - sets a property. In this case, document-name would be a well defined property that controls the name of the output file. You can use {property name} to retrieve already set properties.

    • {function document-system} - used to output documentation for the function named 'document-system (cf. Tinaa)

    All of these examples presuppose that I (or someone) has written commands to accept and process them. I think that this mechanism handles about 90% of what I want. The only missing bit is Wiki-links.

  • For Wiki-links, I'm thinking of using double brackets. E.g., {{Another page}} would link to "Another%20Page". But one also wants to be able to specify different text, and I think something like {{Another page} different text} would do fine. The syntax starts to get a bit overloaded, but I think that even {{Another Page} different text :title This leads to another page :class funky-link} would be fine.

I'm not sure when I'll get to this... Soon, I hope, but let me know (gwking@metabang.com) when you see something I've missed. Thanks.


Runner
Tuesday, July 18, 2006

William C. Dietz's Runner is decent science fiction: decent plot, decent characters, decent technology, decent writing. Everything about the book was, well, decent. If it sounds as if I'm damning with faint praise, it's because I am. This was a book I finished only because it's the sort of book I can read very fast. There was nothing particularly bad about it but also nothing particularly good. The characters were mostly believable, but there was nothing organic about them. The writing was passable but surely not memorable (no phrase savoring here, move along). The plot was a standard "make a journey and pick up allies and enemies and learn things about yourself on the way" and the science was neither deeply explored nor central to the plot. Maybe you'll love it, but there are better books out there to read.


Update: trivial-shell
Monday, July 17, 2006

Trivial Shell now works with CMUCL. Thanks to Satyaki for the patch.


Logo Wiki / Code Wiki
Friday, July 14, 2006

(Via Paolo Amoroso) The Logo Wiki is a great example of a code wiki. It needs some better version control of the code (in case of mistakes and typos) and I'm not sure how it deals with spam, etc. Still, it's fun and it makes it really easy to get a feel for Logo itself -- it helps that one of Logo's purposes was (is?) teaching programming and that the interface is primarily a turtle carrying a pen!

Paolo Amoroso mentions that a Lisp variant would be even cooler and he's right in a way. I'm not sure that the examples would be as fun though.


the Omnivore's Dilemma
Wednesday, July 12, 2006

Dreams of innocence are just that; they usually depend on a denial of reality that can be its own form of hubris.

I'll try to avoid the obvious food puns, but Michael Pollan's The Omnivore's Dilemma is a rich dessert of elegant verbal treats, sweetened with thoughts both philosophical and political.

(Quoting local food advocate Joel Salatin) "It's all connected. This farm is more like an organism than a machine, and like any organism it has its proper scale. A mouse is the size of a mouse for a good reason, and a mouse that was the size of an elephant wouldn't do very well."

Pollan's four meals (the industrial, the agrarian, the organic and the gathered) clearly delineate some of our possible relationships with food; what's more, it's clear that the most common relationships are the worst for us and the health of our shared biosphere.

Our food system depends on consumers' not knowing much about it beyond the price disclosed by the checkout scanner. Cheapness and ignorance are mutually reinforcing. And it's a short way from not knowing who's at the other end of your food chain to not caring--to the carelessness of both producers and consumers.

This lack of knowledge -- and indeed the (apparent) strong desire of the powers that be to deny knowledge, crush and impede the Freedom of Information Act, pretend that processed food is just like its real counterparts, and distract us with (wonder) bread and circuses, and with fear -- is what worries me. I believe that people will do the right thing (most of the time, by and large) if they know what that thing is, if it's not too hard to do, and if doing it is visible and engenders positive feedback loops. Mass industrial capitalism does not support a human economy of scale.

Regardless of that, however, this is a great book and I enjoyed every minute of my reading!


Update: ASDF-Binary-Locations
Wednesday, July 12, 2006

Thanks to Joshua Moody, ASDF-Binary-Locations now knows something about 64-bit Allegro Common Lisp.


Update: CL-Markdown
Wednesday, July 12, 2006

I've made several small tweaks to CL-Markdown in an effort to bring it a bit closer to its Perl cousin. In particular:

  • I finally began to add support for escaped characters (e.g., you can now say \*hello\* to output *hello*).
  • Corrected a few edge cases and missing bits of support for links.
  • Made some very small improvements in the handling of Markdown coding within code blocks (it's not complete, but every step is a step).

As usual, CL-Markdown has a host of dependencies. You may want to try System-check to see what needs to be updated.


Mark Watson on Global Warming
Saturday, July 8, 2006

Lisp and Java consultant Mark Watson on global warming:

BTW, the solution to global warming (or at least improve the situation) is simple: increase the tax on the use of carbon based fuels while decreasing sales and income taxes. Make it economically viable to create new technologies and industries that reduce our energy and environmental damage "foot prints".

He's right. But America is addicted to oil, air conditioning and cheap corn so we know it won't happen.


Another step towards ASDF system understanding
Thursday, July 6, 2006

As usual, progress is slower than desired or expected and my reach seems to ever exceed my grasp (I think it has something to do with the anatomy of my hand...). Nonetheless, there are now some spiffy system dependency graphs up at enterpriselisp.com. Not surprisingly, CL-PPCRE's is one of the most complex. In case it's not clear (and it probably isn't!), you can navigate between systems by clicking on the graph vertexes. Also, the graphics are all SVG so you'll need a browser that knows how to speak it. More web phun is in the pipeline.


ASDF turns 100
Wednesday, July 5, 2006

I just had the honor of checking in the 100th revision of ASDF. Cool!

The newest feature is the addition of load-preferences and preference-file-for-system/operation. These generic functions can be specialized on the system and the operation.

The out of the box behavior of load-preferences is to do nothing except in the case of a load-op or a load-source-op. For either of those operations, load-preferences calls preference-file-for-system/operation to get a pathname. If the pathname returned exists, load-preferences loads it.

By default, preference-file-for-system/operation returns ~/.asdf/<name-of-system>.lisp. It's been said that an example is worth a 5.19 pictures so here are my preferences for asdf-binary-locations (in /users/gwking/.asdf/asdf-binary-locations.lisp):

(in-package asdf)
(setf *default-toplevel-directory*
      "/users/gwking/temporary/fasls/")
(setf *centralize-lisp-binaries* t)
;; force SBCL things to stay in SBCL
(setf *source-to-target-mappings*
      '(("/usr/local/lib/sbcl/" nil)
        ("/usr/local/lib/sbcl0.9.9" nil)
        ("/usr/local/lib/sbcl0.9.7" nil)))

These put all of my compiled files into sub-directories of ~/temporary/fasls (except for SBCL stuff which stay where they are expected).

The nice thing about this is that the preferences are loaded after the system whose preferences are being set is loaded. This is nice because it's hard to set preferences for a system that doesn't exist yet (because, for example, the home package of the variables you'd like to set isn't there).
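Since these are generic functions, per-system behavior is only a method away. A purely hypothetical sketch (I'm assuming the (system operation) argument order the name suggests; check the actual generic before copying this):

(defmethod asdf:preference-file-for-system/operation
    ((system (eql (asdf:find-system 'cl-markdown)))
     (operation asdf:load-op))
  ;; keep this system's preferences somewhere non-standard
  #p"/users/gwking/preferences/cl-markdown.lisp")
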

(Thanks to the CCLAN and especially Christophe Rhodes who provided valuable feedback! Any remaining errors are probably the fault of either the Bush or the Clinton administrations. Or both of them together).


Those wacky biologists...
Tuesday, July 4, 2006

A team from the University of Ulm in Germany and the University of Zurich set a band of Saharan desert ants out questing for food. They eliminated direction as a factor by sending the ants through a long, straight tunnel. Once the ants reached the food, the scientists gathered them up and made some amendments to their legs: Some of the ants were fitted with stilts, while others had their appendages partially amputated.

Actually, I think that they had a great idea! Calling it "counting", however, reads more into the data than is available. Many animals are known to count accurately and to keep track of things like rate, etc. (Charles Gallistel's book is a wonderful read if you're interested in learning much, much more!), but just because a human would solve a problem using solution X doesn't mean that an animal uses X too (even if we can't think of any other method to use than X...).


Updated: trivial shell
Monday, July 3, 2006

I fixed a typo in trivial-shell (sb!ext instead of sb-ext... go figure).


July Fourth in America
Monday, July 3, 2006

With apologies in advance for the politics... Howard Zinn says it for me:

On this July 4, we would do well to renounce nationalism and all its symbols: its flags, its pledges of allegiance, its anthems, its insistence in song that God must single out America to be blessed.

There's probably a bad joke here about Lisp being marginalized and America becoming increasingly marginalized -- or something -- but I don't want to think that hard.

... nationalism is given a special virulence when it is said to be blessed by Providence. Today we have a president, invading two countries in four years, who announced on the campaign trail last year that God speaks through him.

We need to refute the idea that our nation is different from, morally superior to, the other imperial powers of world history.

We need to assert our allegiance to the human race, and not to any one nation.

Transparency. Openness. Aim for the heights. Think the best of people. Do good and be nice.

keep peace.


Updated: CL-Graph
Saturday, July 1, 2006

Thanks to Attila Lendvai for some new patches to CL-Graph. He improved the GraphViz support and added some &allow-other-keys to avoid some warnings. I also just added the #'delete-all-edges generic function.


Two papers on Racer
Thursday, June 29, 2006

The semantic web is one of those great ideas that hasn't quite jelled. Eventually, the infrastructure, specifications and plans that are welling up out of the web's interstices are going to come together into something beautiful and strange. One of the droplets slouching towards us is Racer: a reasoner that takes all the little factoids expressed in RDF, DAML+OIL, OWL and so forth and derives consequences. These two papers (Racer: A Core Inference Engine for the Semantic Web and Racer: An OWL Reasoning Agent for the Semantic Web) share much of the same text so I'm describing them together. They provide a very high level overview of the various languages used to express semantic web knowledge (the acronym salad I listed above) and how Racer can use them to explore consistency, sub-classing (and other relationships), role filling and so forth. The papers describe several actual systems (e.g., RICE: Racer Interactive Client Environment, OilEd, and DIG). Neither paper is particularly useful in terms of giving compelling examples of how the semantic web is going to make our lives better (they give features not solutions) but they're a start.


#+Ignore is fine, #+(or) is bliss
Thursday, June 29, 2006

Although Common Lisp has a multiline-comment facility (that even handles nesting), many Lispers tend to use the #+ / #- reader macros to temporarily remove chunks of code from the view of the evaluator (note that these do not remove the code from the view of the reader, an issue which often leads to confusion the first time you can't load code that you think is 'commented' out — which is why it's important to remember that #+/#- is not a commenting facility even though you may be using it like one...). The reason for this in my case is simple: laziness. I'd rather do something in one place than in two, and it's easier to place (and remove) #+ignore in front of a form than to wrap the form in #| and |#. Frankly, I think that this sort of laziness is part of the human condition. If we really want most people to do the right thing most of the time, then we need to make it easy and convenient to do the right thing (e.g., recycling, carpooling, keeping documentation up to date, and so on).

Using the reader macro to hide code isn't a bad practice as practices go (it's certainly healthier than coding in C or C++ <smile>), but lurking in the back of every coder's mind is that horrible question: "What if," they say to themselves in the still hours of the night, "what if someone pushes :ignore onto the *features* list?" (or :no, :later, :old, :new, :nil, :never, :not-yet, ...). Well, I was looking at some of Peter Seibel's code recently and found a very nice way to have my cake even after eating it. Is there a feature expression that is guaranteed to be false regardless of what is on *features*? There is: (or) with no subexpressions, which makes #+(or) a very nice thing indeed.
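
The whole trick fits in one example:

;; (or) with zero subexpressions is never true, so the following
;; form is always skipped -- and no keyword pushed onto *features*
;; can ever change that.
#+(or)
(defun scratch-experiment ()
  (error "this is never evaluated"))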


Microsoft is sharing code
Tuesday, June 27, 2006

I think that it will be interesting to see how this functions. Microsoft certainly has the resources to make it a useful code garden, but I'm not sure if the political will is really there (shared source isn't open source). Coincidentally enough, one of the features I'd like to support at Enterprise Lisp is the ability to share Lisp code easily and with structure. The CLiki and Lisp Paste supply a good base, but I think that a bit more infrastructure could make a big difference in usability and utility.


Updates; E8EL is live
Tuesday, June 27, 2006

Thanks to the Tech Co-op, Enterprise Lisp is live and on-line in what I hope will be its permanent home. System-check has been updated to version 0.1.6 to accommodate the new home.


Pretty wonderful (if you ask me...)
Saturday, June 24, 2006

Little brown dress:

I am making one small, personal attempt to confront consumerism by refusing to change my dress for 365 days.

I heard the artist interviewed last night on WAMC. I think it's great!


Ain't soccer grand
Saturday, June 24, 2006

Dynamic, fluid, a constant reinterpretation of tactics and strategy... like Life, like Lisp.


Sparklines + AJAX = nice
Thursday, June 22, 2006

Sparklines are little graphs that run in-line with your text. I believe that Edward Tufte invented them and I know that he talks about them in his upcoming book, Beautiful Evidence. Several folks have written code to create them (I think that there is even a Lisp implementation out there somewhere for CL-PDF?). Joe Gregorio has a nice Python AJAX web application if you want to play with them yourself.


How come Skype and Apple's Address Book don't play nice?
Thursday, June 22, 2006

Skype stores its own contact data. How 1990s.


Common Lisp and RSS
Thursday, June 22, 2006

I've just started to work on an RSS feed for the Enterprise Lisp System Checker. The thing is, I can't find any Lisp libraries that make generating RSS feeds as easy as I think it should be (Kevin Rosenberg's CL-RSS is still available but seems unmaintained and only generates and parses RSS 0.92 feeds). So, in typical Lisp hacker fashion, I wondered if I should first write a simple RSS library; something that can parse and generate RSS 0.92, RSS 1.0, RSS 2.0, and Atom feeds. If I did, there are two obvious designs: an object-based one and a language-based one. In the former, there would be classes for feeds and items and the usual things would happen to create, parse, and generate feeds. In the latter, RSS feeds would look like an XML template language (e.g., LML2, YACLML, and so forth).

The language approach would be conceptually simple but provides no useful abstractions (and no help in parsing). I think that templates make sense for a language like HTML because HTML content can be just about anything. RSS, on the other hand, always looks like a channel description plus a list of items. The template just doesn't help that much. So on to objects and classes and such...
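
A hypothetical sketch of the object-based direction (none of these names exist in any current library):

(defclass rss-channel ()
  ((title :initarg :title :accessor title)
   (link  :initarg :link  :accessor link)
   (items :initform nil   :accessor items)))

(defclass rss-item ()
  ((title       :initarg :title       :accessor title)
   (link        :initarg :link        :accessor link)
   (description :initarg :description :accessor description)))

;; generate-feed would then dispatch on the desired output format:
;; (generate-feed channel :format :rss-2.0 :stream *standard-output*)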


System Checker
Friday, June 16, 2006

The Enterprise Lisp system checker has reached a stable point and I'm also happy with system-check. Unless I hear otherwise, the next steps will be (in no particular order):

  • Adding a Checker System status report - is it running, how many requests has it received, etc. Aside from helping me to know what things need focus, this information should be helpful to library writers and developers: what implementations are being used? How often are libraries changing, and so forth.
  • Adding an explain mode to system-check. Currently, update can tell you which of your libraries are out of date but I think that it might be helpful to see exactly which files have changed.
  • Creating an RSS feed to track which libraries are being updated. It might be nice to allow Lispers to register and customize the feed but I don't think I'll worry about that for a bit yet.

Once these features are stable (and once I've moved operations to a stable system), then I'll get back to ASDF-Install-tester / ASDF-Status and begin integrating them into Enterprise Lisp.


more various updates
Friday, June 16, 2006

ASDF-Binary-locations now knows about 64-bit OpenMCL (thanks to Joshua Moody for the heads up and the patch).

I've been having some troubles keeping the system currently hosting Enterprise Lisp up and running (those darn kids <smile>). I'm also ready to move over onto a virtual Xen thing run by the nice folks at GrokThis.

I've also improved the systems report slightly so that it is clearer what the systems checker found amiss. There are now five kinds of errors:

  • System not available - the tarball the CLiki points to is a 404
  • System file error - the tarball couldn't be digested for some other reason
  • Signature not available - the GPG signature file is a 404
  • Signature file error - the signature file made the checker unhappy for some other reason
  • Extraction Error - Something went amiss (obviously, this category needs to be broken out)

Note that signature problems can be restarted around (especially if you have the latest ASDF-Install), so it is possible to have a valid system and a valid signature even if the system status is not OK.


various updates
Thursday, June 15, 2006

  • CL-HTML-Parse - now happy for Allegro modern Lisp
  • CL-Graph - fixed bug in find-edge-between-vertexes that caused an infinite loop when both vertexes were not in the graph (Thanks to Joshua Moody)
  • CL-Containers - fixed bug in item-at-1 for alist-containers
  • ASDF-Binary-locations - now uses the file ~/.asdf/asdf-binary-locations.lisp to read preferences. This occurs after the code has been loaded and makes setting things like *source-to-target-mappings* a bit cleaner (IMHO). The pathname is created using:
(merge-pathnames
 (make-pathname :name "asdf-binary-locations"
                :type "lisp"
                :directory '(:relative ".asdf"))
 (truename (user-homedir-pathname)))

More why? (not Y)
Wednesday, June 14, 2006

I've received some follow-up on my "Why isn't Lisp da bomb for web development" post from last week. Since the CLiki isn't the place to talk about this kind of thing, I've made a little wiki at Infogami for it: lispandwebdev. Without going into details, the nub of the problem may be that Lispers are busy writing great stuff for the few whereas the Ruby on Rails folks et al. are catering to the plebeian masses. I don't want to get sucked into language wars or anything like "my language is better than your language" or even the wistful "if only they could see it, then they would understand that Lisp is the way"... I also don't want to disparage the existing Lisp frameworks like BNKR, Lisp on Lines, UCW, and the others I've forgotten. At issue, I think, is the sort of vitriol that leads to disasters like this (also noted by Stefan Scholl). This is happening very publicly on Reddit and many people will have their prejudices created or confirmed that Lispers are arrogant, smug weenies.

I find this almost unbearably sad.


ASDF-Install 0.5.5
Wednesday, June 14, 2006

Version 0.5.5 of ASDF-Install is ASDF-Installable and also available in the usual Darcs and tarball formats. This version starts to rationalize some of the "where to look for system files" registry management (i.e., there is now a function asdf-install:add-registry-location). It also adds a new restart so that packages can be installed even if there is no GPG signature file available (e.g., trivial-gray-streams).

(See the change log for all the gory details.)


two quick notes
Friday, June 9, 2006

I bumped up ASDF-Install to 0.5.4. The changes are very minor and serve only to make processing putative future keyword arguments a bit more consistent.

I also found a bug in system-check. At one point I had remembered to wrap my HTTP POST request in a with-standard-io-syntax, but then I ran into some confusion with SBCL and, I think, pretty printing... To be honest, I never figured it out (slinks away with shame-faced admission of guilt...). In any case, without the with-standard-io-syntax, it was quite possible for your personal print settings to muck things up badly (e.g., I have *print-length* set to 10 and the server didn't know what to do with an ellipsis). So, if you have already downloaded system-check, you might want to do so again. If you haven't, why, then, you should -- how is that supposed to be punctuated anyway? It makes sense in a conversation but I'm not sure how to write it... Anyway, please do check out system-check and let me know how to make it suit your needs.
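
The gist of the fix, as a sketch (post-request stands in for whatever HTTP client is in use):

;; print the request body under standard, predictable settings so a
;; user's *print-length* of 10 can't truncate the data into "..."
(with-standard-io-syntax
  (post-request *server-url* (write-to-string system-data)))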


Why?
Friday, June 9, 2006

Joel Reymont is leaning toward Ruby on Rails over Lisp for web application development and I don't get it. Joel is very smart and on top of things, and I figure that he's right: Lisp just isn't there yet when it comes to building web applications quickly and easily, unless you want to make it half your own private research project.

But why? Lisp is good. Lisp excels at Domain Specific Languages. Lisp has the REPL, for goodness sake. It can't just be that there are many Lisp dialects / implementations and that it's a bit hard to get, say, a socket library that works on all of them... can it? I mean, the MediaWiki runs on PHP (I think) and everything I've seen or heard about PHP is "...ick...". So why does the MediaWiki have all the loving goodness while we have the CLiki? (This isn't to diss the CLiki, by the way; it's cool and I like it, but as wikis go there isn't much there and nothing much has happened to it for ages.)

So, bloggers on Planet Lisp, start your engines please... I'd like several essays from people who know the skinny on web dev and Lisp. Is Joel wrong? Is there a missing secret ingredient? What does Lisp need to come from behind in the web application development language race? After all, Ruby was a dark horse until Rails came along. The best essay gets a prize (to be announced later once I think of something).


That was fast
Friday, June 9, 2006

I bought a new hard drive for my laptop yesterday afternoon (the 8th) from PBParts.com. They promised that the drive would ship sometime before the 13th which seemed fine to me.

Guess what arrived today via FedEx. That's right, I've got the little thing already. It's always good to lower expectations but I'm very impressed.

It looks like a hardware kind of night tonight.


ASDF Checker report
Thursday, June 8, 2006

Here's the first ASDF Checker Summary report from Enterprise Lisp. The checker process runs daily and checks all of the known ASDF installable systems to see if they seem to work (i.e., that they download, have GPG signatures, that their system files seem to make sense and so on). The data from checker will be grist for tester once I finish refactoring ASDF-Install-tester to create it.

There is much room for improvement in the report (and feedback is welcome!). The last three columns show the system's status, whether we have a valid system file, and whether or not we were able to create a system signature. The current system status column isn't too useful; I need to spend a bit of time looking at the data and creating informative codes... A valid system is one whose system definition the checker could load -- it currently conveys no more information than whether or not the system has a status of error. Finally, the system signature is a list of the direct files of the system and their universal times. Here are two examples:

? (dsc:system-signature 'moptilities)
(("moptilities.asd" . 3345750061) 
 ("dev:moptilities.lisp" . 3356703878))
? (dsc:system-signature 'anaphora)
(("anaphora.asd" . 3287421247)
 ("packages.lisp" . 3287421247)
 ("early.lisp" . 3287421247)
 ("symbolic.lisp" . 3287421247)
 ("anaphora.lisp" . 3287421247))
? 

These signatures are part of defsystem-compatibility. Checker needs to take more pains than it yet does to ensure that features and such don't influence signature computation. That's another item on the seemingly endless list of next steps!


On average I'm in the middle of a hot bed of Lisp activity
Wednesday, June 7, 2006

Unfortunately, the distribution is highly bimodal. My LinkedIn network shows that I have lots of connections in San Francisco and in England and Europe. Oh well, Lisp is catching on and eventually there is bound to be more activity here. I wish that America had real high speed rail so that I could easily zoop down to NYC and get more involved with LispNYC.

If wishes were fishes... we'd all cast nets.


announce: system-check (beta)
Tuesday, June 6, 2006

Enterprise Lisp has been rumbling and grumbling slowly to life around me. Today I'm announcing the first beta of System-check, an asdf-upgrade-like facility that will eventually integrate with an improved ASDF-Install-tester and other cool things. The idea behind system-check is that it's easier to centralize the testing of Lisp libraries than it is to get all of the library developers to agree on versioning schemes and the like. Therefore Enterprise Lisp will regularly check on all ASDF installable systems and

  • make sure the system can be downloaded,
  • make sure that the GPG signature can be retrieved,
  • make sure that the signature file is valid,
  • make sure that the tar archive can be decompressed and
  • that the archive contains a valid system definition
  • that can be properly loaded into a lisp system

This will be coupled with an improved and distributed ASDF-Install-tester that can go on to make sure that systems actually do install on a variety of Lisps and environments.

System-Check is the client that you can run to see if your systems are up to date. Here is an example session (in OpenMCL). First, we load system-check. It displays instructions when it finishes loading:

? (asdf:oos 'asdf:load-op 'system-check)
; loading system definition from user-home:darcs;asdf-systems;system-check.asd.newest into #<Package "ASDF0">


----------------------------------------------------------------------
;; System-check helps keep your ASDF-Installable systems up to date 
;; by communicating with enterpriselisp.com via HTTP. Enterprise Lisp 
;; checks ASDF-Installable systems regularly to make sure that they 
;; work properly and install correctly.  
;; 
;; System-check has three main entry points: 
;; 
;; 1. update - checks all of your ASDF-Installed systems against the 
;; most recent available version and lets you select which ones to 
;; update. 
;; 
;; 2. gather - performs the same check as update but returns the 
;; results as a list. 
;; 
;; 3. check-system - performs a check on a single system. 
;; 
;; More information can be found in the documentation or at 
;; enterpriselisp.com. 
;; 
;; System-check can look for systems in several different ways: 
;; 
;; * installed-systems (used by default) - returns a list of systems 
;; that seem to have been ASDF installed (see its documentation for 
;; details). 
;; 
;; * installable-systems - returns a list of all of the available 
;; systems (using the CLiki). 
;; 
;; Both of these functions take the keyword argument 
;; :only-asdf-installable?. If this is true, then System-check will 
;; query the CLiki to see if the system is ASDF installable.  
;; 

We cut to the chase and call the update function.

? (system-check:update)
Searching for systems.
; loading system definition from /Users/gwking/.asdf-install-dir/systems/cl-difflib.asd into #<Package "ASDF1">
; registering #<SYSTEM :CL-DIFFLIB #x84E1786> as CL-DIFFLIB
; loading system definition from /Users/gwking/.asdf-install-dir/systems/uffi.asd into #<Package "ASDF1">
; registering #<SYSTEM UFFI #x8514DE6> as UFFI

Checking 59 systems...........................................................

Results
============================================================
  4    we-are-latest  (local system has more recent changes than the remote)
------------------------------------------------------------
  xmls                          mk-defsystem                  
  cl-prevalence                 cl-html-diff                  


  6     both-changed  (system has modifications locally and remotely)
------------------------------------------------------------
  system-check                  metatilities                  
  lml2                          defsystem-compatibility       
  cl-graph                      cl-containers                 


  8            error  (the server was unable to check the system)
------------------------------------------------------------
  split-sequence                net-telent-date               
  ironclad                      clx                           
  cl-store                      asdf-install                  
  asdf-binary-locations         arnesi                        


 11      need-update  (remote system has changes)
------------------------------------------------------------
  trivial-http                  tinaa                         
  moptilities                   metacopy                      
  metabang-bind                 lift                          
  cl-variates                   cl-mathstats                  
  cl-html-parse                 asdf-system-connections       
  araneida                      


 30               ok  (local system is up to date)
------------------------------------------------------------
  xlunit                        wilbur-ext                    
  uffi                          trivial-configuration-parser  
  s-xml-rpc                     s-xml                         
  s-utils                       s-sysdeps                     
  s-http-server                 s-http-client                 
  s-base64                      rt                            
  puri                          md5                           
  lw-compat                     kmrcl                         
  html-encode                   diff                          
  contextl                      clsql                         
  closer-mop                    cl-utilities                  
  cl-fad                        cl-dot                        
  cl-difflib                    cl-base64                     
  cl-ajax                       cffi                          
  asdf-upgrade                  anaphora                      


Now that we've seen the report, we're asked what we want to update and ASDF-Install takes over the show.

We'll now ask which kinds of systems you want to update. 
You will be able to confirm before the update process 
begins.

11 systems need to be updated. Do you want to update them? (y or n)  y

Marking these systems for update: araneida, asdf-system-connections, cl-html-parse, cl-mathstats, cl-variates, lift, metabang-bind, metacopy, moptilities, tinaa, trivial-http.

6 systems are changed locally and remotely. Do you want to update them? (y or n)  n


Updating 11 systems
Install where?
0) System-wide install: 
   System in /Users/gwking/.asdf-install-dir/systems/site-systems/
   Files in /Users/gwking/.asdf-install-dir/systems/site/ 
1) Personal installation: 
   System in /Users/gwking/.asdf-install-dir/systems/
   Files in /Users/gwking/.asdf-install-dir/site/ 
2) Abort installation.
 --> 1
;;; ASDF-INSTALL: Downloading 531636 bytes from http://common-lisp.net/project/araneida/release/araneida-latest.tar.gz to araneida.asdf-install-tmp ...


"gpg: Signature made Fri Dec  2 12:55:13 2005 EST using DSA key ID 5E55AFEB" 
"[GNUPG:] ERRSIG 19876FCE5E55AFEB 17 2 00 1133546113 9" 
"[GNUPG:] NO_PUBKEY 19876FCE5E55AFEB" 
"gpg: Can't check signature: public key not found" 
> Error in process listener(1): No key found for key id 0x#1=(19876FCE5E55AFEB 17 2 00 1133546113 9). Try some command like 
>                                 gpg  --recv-keys 0x#1#
> While executing: #<Anonymous Function #x8426CE6>
> Type :POP to abort.
Type :? for other options.
1 > :r
0. Return to break level 1.
1. #<RESTART ABORT-BREAK #x2947FE>
2. Don't check GPG signature for this package
3. Retry GPG check (e.g., after downloading the key)
4. Return to toplevel.
5. #<RESTART ABORT-BREAK #x294CBE>
6. Reset this process
7. Kill this process
1 > (:c 2)
Invoking restart: Don't check GPG signature for this package
;;; ASDF-INSTALL: Installing araneida.asdf-install-tmp in /Users/gwking/.asdf-install-dir/site/, /Users/gwking/.asdf-install-dir/systems/
araneida-version-0.90.1/
;;; and so on...

We can also check the status of a single system. I'll turn on the verbose mode so that you can see what gets passed back and forth.

? (system-check:check-system 'anaphora :verbose? t)
Signature: (:SYSTEM :ANAPHORA :SIGNATURE (("anaphora.asd" . 3287421247)
 ("packages.lisp" . 3287421247) ("early.lisp" . 3287421247) ("symbolic.lisp" . 3287421247) 
("anaphora.lisp" . 3287421247)) 
:FEATURES (:KMR-MOP :CLX-ANSI-COMMON-LISP :ARANEIDA-THREADS :CL-FAD :CLOSER-MOP :ASDF-INSTALL :ASDF :GWKING :PRIMARY-CLASSES :CCL :CCL-2 
:CCL-3 :CCL-4 :CORAL :COMMON-LISP :MCL :OPENMCL :ANSI-CL :PROCESSES :UNIX :OPENMCL-NATIVE-THREADS :OPENMCL-PARTIAL-MOP 
:MCL-COMMON-MOP-SUBSET :OPENMCL-MOP-2 :POWERPC :PPC-TARGET :PPC-CLOS :PPC32-TARGET 
:PPC32-HOST :DARWINPPC-TARGET :DARWINPPC-HOST :DARWIN 
:POWEROPEN-TARGET :32-BIT-TARGET :32-BIT-HOST :BIG-ENDIAN-TARGET 
:BIG-ENDIAN-HOST :OPENMCL-PRIVATE-HASH-TABLES) 
:IMPLEMENTATION "openmcl-1.0-darwin-powerpc" 
:VERSION "0.1" 
:PATHNAME-SEPARATOR "/")
  Response: 200
  Headers: ((:DATE . "Wed, 07 Jun 2006 16:24:23 GMT") (:SERVER . "Apache/1.3.33 (Darwin) mod_lisp/2.43") 
(:SIGNATURE-RESULT . "NIL") (:POSTED-CONTENT . "(:SYSTEM :ANAPHORA :SIGNATURE ((\"anaphora.asd\" . 3287421247) (\"packages.lisp\" . 3287421247) (\"early.lisp\" . 3287421247) (\"symbolic.lisp\" . 3287421247) (\"anaphora.lisp\" . 3287421247)) :FEATURES (:KMR-MOP :CLX-ANSI-COMMON-LISP :ARANEIDA-THREADS :CL-FAD :CLOSER-MOP :ASDF-INSTALL :ASDF :GWKING :PRIMARY-CLASSES :CCL :CCL-2 :CCL-3 :CCL-4 :CORAL :COMMON-LISP :MCL :OPENMCL :ANSI-CL :PROCESSES :UNIX :OPENMCL-NATIVE-THREADS :OPENMCL-PARTIAL-MOP :MCL-COMMON-MOP-SUBSET :OPENMCL-MOP-2 :POWERPC :PPC-TARGET :PPC-CLOS :PPC32-TARGET :PPC32-HOST :DARWINPPC-TARGET :DARWINPPC-HOST :DARWIN :POWEROPEN-TARGET :32-BIT-TARGET :32-BIT-HOST :BIG-ENDIAN-TARGET :BIG-ENDIAN-HOST :OPENMCL-PRIVATE-HASH-TABLES) :IMPLEMENTATION \"openmcl-1.0-darwin-powerpc\" :VERSION \"0.1\" :PATHNAME-SEPARATOR \"/\")") 
(:CONTENT-LENGTH . "772") (:REMOTE-IP-ADDR . "155.212.227.170") (:REMOTE-IP-PORT . "30528") (:SCRIPT-FILENAME . "/Library/WebServer/Documents/compare.lsp") (:SERVER-IP-ADDR . "10.0.1.2") (:SERVER-IP-PORT . "80") (:SERVER-PROTOCOL . "HTTP/1.0") (:METHOD . "POST") 
(:URL . "http://metabang.gotdns.com/compare.lsp") (:SERVER-ID . "metabang") (:SERVER-BASEVERSION . "Apache/1.3.28") (:MODLISP-VERSION . "2.43") (:HOST . "metabang.gotdns.com") (:USER-AGENT . "simple HTTP for Common Lisp")
 (:CONNECTION . "close") (:CONTENT-TYPE . "x-www-form-urlencoded"))
:OK
?

System-check and the Enterprise Lisp system checker are both in beta at the moment. I've tested on several Lisps but expect that problems and edge cases remain. Please let me know if you try System-check and it fails. I'm also working on some improvements to ASDF-Install so that it behaves more reasonably when installing multiple systems, and on improving the back and forth between the client and the server. Enterprise Lisp is all about making Lisp easier for everyone, so please let me know if anything seems awry.


announce: simple-http
Tuesday, June 6, 2006

The world didn't need another HTTP library. Unfortunately, I did. Thus is born Simple-HTTP. It builds on the base of Trivial-HTTP and adds things like HTTP head, HTTP download and HTTP resolve. Eventually, it will pull in some useful bits and pieces from Lemonodor and Lisp Paste (e.g., here and here). Those in the know should tell me what else would be worth adding. Enjoy.
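
For flavor, here is the sort of session I have in mind. The package prefix and calling conventions below are my guesses for illustration, not the final API:

? (simple-http:http-head "http://www.metabang.com/")
;; => the response code and headers, without the body
? (simple-http:http-download "http://www.metabang.com/index.html"
                             #p"/tmp/index.html")
;; => saves the resource to disk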


announce: improved CL-Markdown
Monday, June 5, 2006

These changes aren't nearly as cool as ABL's, but CL-Markdown has taken another few small steps toward Markdown compliance. The latest code tweaks the block structure processing and improves the paragraph recognition logic. This means that its output looks more like that produced by the real Markdown.


announce: improved asdf-binary-locations
Monday, June 5, 2006

Thanks to a patch from Peter Seibel and several good ideas and hints from Robert Goldman, ASDF-Binary-Locations has some changes and improvements. The most significant change is that the variable *system-configuration-paths* has been renamed to *source-to-target-mappings* because the latter name is, IMHO, much, much better than the former. There are also two new variables to control behavior, and the innards have been rewritten with generic functions so that you can have fine control over where exactly things go at the system, the operation and the component level.

The three control variables are:

  • *centralize-lisp-binaries* - If true, compiled lisp files without an explicit mapping (see *source-to-target-mappings*) will be placed in subdirectories of *default-toplevel-directory*. If false, then compiled lisp files without an explicit mapping will be placed in subdirectories of their sources.
  • *default-toplevel-directory* - If *centralize-lisp-binaries* is true, then compiled lisp files without an explicit mapping (see *source-to-target-mappings*) will be placed in subdirectories of *default-toplevel-directory*.
  • *source-to-target-mappings* - Specifies mappings from source directories to target directories. If the target is nil, then the source is not mapped to anything; i.e., it is left as is. This has the effect of turning off ASDF-Binary-Locations for the given source directory.

Note that if you've already set *system-configuration-paths* (e.g., in your lisp startup file), then ASDF-Binary-Locations will warn you about the change and automagically set *source-to-target-mappings* to whatever value you gave *system-configuration-paths*.
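
Concretely, a startup-file sketch using all three variables might look like this (the paths and the mapping are made up for illustration):

;; put all fasls under ~/.fasls/, except for sources under
;; /usr/local/lisp/, which are left alone (a nil target turns
;; ASDF-Binary-Locations off for that source directory)
(setf *centralize-lisp-binaries* t)
(setf *default-toplevel-directory*
      (merge-pathnames ".fasls/" (user-homedir-pathname)))
(setf *source-to-target-mappings*
      '(("/usr/local/lisp/" nil)))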

Also note that the finer control mentioned above has not been extensively tested (i.e., it really hasn't been tested at all) yet but will be once I finish the test suite. Won't that be sweet!

Please let me know if your mileage varies.


more identity theft on the way...
Sunday, June 4, 2006

The names and credit-card numbers of 243,000 Hotels.com customers were on a laptop computer stolen from an employee of accounting firm Ernst & Young, according to sources familiar with the matter.

Is it standard practice to provide auditors with names and credit card numbers? Why would they need that information? Why aren't they given anonymized data? If "The security and confidentiality of our client information is of critical importance to Ernst & Young", then why don't they do more to take it seriously?


the Golem's Eye
Wednesday, May 31, 2006

Book two of the Bartimaeus trilogy finds the magician Nathaniel and the djinn Bartimaeus once again embroiled in the scheming power politics of a Europe where magic rules. Hidden forces further devious plots: the resistance returns and is betrayed from within; a golem is destroying the city; betrayal and duplicity abound. In short, it's a lot of fun and an enjoyable read. Yes, it is a book for young adults (hey, I'm only 42 (and that, as I suddenly recall, is the answer to everything)) but you'll probably enjoy it too.


the Amulet of Samarkand
Wednesday, May 31, 2006

One of the pleasures of having kids is that you can read kids' books without feeling that you're too old to be reading them. The Amulet of Samarkand is very much a kids' book but it's a good one and it comes with interstitial themes worthy of adults. The book tells the tale of an earth where magic works thanks to the ability of some humans to summon spirits of all kinds. It is part one of the Bartimaeus trilogy (the eponymous name comes from that of the main narrator, a djinn of remarkable ability, including much sarcasm and wit). Amulet follows a complex plot with a young hero/anti-hero, many magicians of dubious intent, political scheming at many levels, an anti-magical resistance and enough twists to confuse an adder. It is all painted with a broad brush full of vim, verve and vitality. Buried under all this enjoyable fluff, however, are the messages that power is often neither to be trusted nor desired; that the beneficence of a government is often inversely proportional to how loudly it congratulates itself; and that (to quote Bruce Cockburn) "when you get down to the bottom, love's the only thing that matters."


It's probably because I'm tired
Tuesday, May 30, 2006

and it's unlikely you really want to know. Nonetheless, here is my response to SanDisk's iDon't marketing campaign:

Yes. We are all individuals. But good taste means buying high quality, extremely usable music players that work circles around the competition. This "oh so hip" concealed ad of yours is tacky, tasteless and offensive. iDo iListen to my iPod. iDon't listen to, buy, or even want to see a SanDisk knock-off. Get a grip. Get a life. If you want to sell music players, make a good one and stop insulting success.

ok, no more venting until at least tomorrow... (hmm, that gives me 7 minutes).


CLSQL and me: I feel so Microsoft Access ugly
Tuesday, May 30, 2006

I did a bunch of database stuff back when SQL 92 was exciting. I used early PC database systems like dBase IV, Foxpro, Borland's Paradox, and Microsoft's Access. Since auto-increment columns hadn't reached down to those trenches, I ended up doing the old "keep track of the maximum key in a separate key table yourself" trick. Not fun, but effective -- well, it works.

Today, I was messing with CLSQL (connecting to SQLite) and felt stymied trying to correctly get my primary keys to work. In the hopes that a wiser soul will feel my pain, here is what I did.

(def-view-class primary-key-mixin ()
  ((id :db-kind :base :type integer
       :db-constraints (:primary-key)
       :reader id :initarg :id)
   ;; a virtual slot: used on the Lisp side only, never stored in the database
   (table-name :db-kind :virtual
               :reader table-name
               :initarg :table-name)))

(defmethod initialize-instance :after ((instance primary-key-mixin) &key)
  ;; assign a fresh key unless the caller supplied one
  (unless (and (slot-boundp instance 'id) (id instance))
    (setf (slot-value instance 'id) (find-next-id (table-name instance)))))
					
(def-view-class sample-table (primary-key-mixin)
  ((name :db-kind :base :type (varchar 40)
	 :db-constraints (:unique :not-null)
	 :accessor name :initarg :name))
  (:default-initargs
      :table-name "sample-table"))

(def-view-class primary-key ()
  ((table-name :db-kind :base 
	       :db-constraints :primary-key
	       :type (string 20)
	       :accessor table-name
	       :initarg :table-name)
   (max-key :db-kind :base :type integer
	    :accessor max-key
	    :initarg :max-key
	    :initform 0)))

(defun recreate-tables (&key really?)
  (unless really?
    (cerror "Yes, really!" "Do you really want to trash the tables and start fresh?"))
  (clsql:drop-table [primary-key] :if-does-not-exist :ignore)
  (create-view-from-class 'primary-key)
  (clsql:drop-table [sample-table] :if-does-not-exist :ignore)
  (create-view-from-class 'sample-table))

(defun find-next-id (table-name)
  ;; read the current maximum key for table-name and bump it (or create
  ;; the bookkeeping row on first use); the transaction keeps concurrent
  ;; callers from handing out the same key
  (with-transaction nil
    (bind ((exists?
	    (select [max-key] 
		    :from [primary-key]
		    :where [= [table-name] table-name]
		    :flatp t))
           (next-key (if exists? (1+ (first exists?)) 0)))
      (if exists?
        (update-records [primary-key]
			:av-pairs `(([max-key] ,next-key))
			:where [= [table-name] table-name])
        (insert-records
         :into [primary-key]
         :av-pairs `(([table-name] ,table-name) ([max-key] ,next-key))))
      (values next-key))))

This defines two view-classes (and the recreate-tables function makes tables out of them). The primary-key table keeps track of the highest key assigned so far; the primary-key-mixin uses it to assign keys as necessary. Since instances can be created and not added to the database, it's quite likely that we'll have gaps but that's not a big deal. This lets me execute code like:

? (setf *s* (make-instance 'sample-table :name "Gary"))
#<SAMPLE-TABLE #x88C5F8E>
? (update-records-from-instance *s*)
; no value
? (setf *s* (make-instance 'sample-table :name "Wendy"))
#<SAMPLE-TABLE #x88C5F9A>
? (update-records-from-instance *s*)
; no value
? (select [*] :from [sample-table])
(("Gary" 0) ("Wendy" 1))
("NAME" "ID")

All of which, while not really exciting, is at least moderately painless. Aside from the fact that doing all of this key management myself strikes me as unbearably last decade (not to mention error prone and probably non-union), I figure that there must be a better way.

Any suggestions?


asdf:test-op redux
Tuesday, May 30, 2006

I wrote about asdf:test-op a bit ago and have since modified many of my opinions about the best way to do it. Yesterday, I noticed that Greg Pfeil had come to exactly my new conclusions back in late March (I read Planet Lisp quasi-religiously but missed this somehow). The chief changes are to move most everything to the system definition (not the test system definition) so that

(asdf:oos 'asdf:test-op 'my-system)

will test my-system (makes sense!). The only bit Greg leaves out is

(defmethod operation-done-p 
           ((o test-op)
            (c (eql (find-system 'moptilities))))
  (values nil))

which keeps ASDF thinking that your test operation hasn't been done (as we all know, testing is never done). Thanks, Greg.
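
For context, here is roughly what the system-definition side of the idiom looks like. This is a sketch, with my-system and my-system-test standing in for real names:

(defsystem my-system
  ;; ... components elided ...
  :in-order-to ((test-op (load-op my-system-test))))

(defmethod perform ((o test-op) (c (eql (find-system 'my-system))))
  ;; referring to my-system-test:run-tests at read time would fail if the
  ;; test package doesn't exist yet, so look the symbol up at run time
  (funcall (intern (symbol-name '#:run-tests) :my-system-test)))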


Comments?
Sunday, May 28, 2006

I've been meaning (and hoping) to write my own simple comment engine for the last umpteen months. Today, I broke down and signed up with Haloscan. I'm not sure how customizable it can be or how I'll like it but I'm willing to see what happens.


Daring fireball
Sunday, May 28, 2006

I like John Gruber.

Can anyone explain how this seven-tiered edition plan is good for anyone other than the managers within Microsoft's bureaucracy? Microsoft is turning into a company that values management decisions that increase complexity over design decisions that increase clarity.

I love simplicity. I don't want to fight with my machine or with my inner mustard chooser. Too often, too much is not more than enough, it's too damn much.


Bellwether
Sunday, May 28, 2006

When I mentioned fads last week, Bill Clementson was kind enough to recommend two books to me: bellwether by Connie Willis and Pattern Recognition by William Gibson. I've read quite a bit of Gibson's work over the years -- dark, but interesting -- but had never even heard of Willis. After reading bellwether, I'm very happy to have finally been introduced.

Willis is a delightful writer whose characters speak with the sort of ironic detachment of the modern person and yet still remain fully human and approachable (in this, she reminds me of Walker Percy). Her subject in bellwether, appropriately enough, is fads and trends and why it is that they ebb and flow across the human condition subject to a tidal pull all their own. The book offers an answer (though I don't think Willis believes it completely): that genius arises out of chaos, as a sort of self-organized criticality that forms because anything else would cause total system collapse; and that trends are both part of this self-organization and also the result of human bellwethers who are "a little faster, a little more greedy". Bellwethers lead without leading (though not in the Taoist sense <smile>) and move at least partly to their own deep beat... pulling at least some of the rest of us in their wake.

Regardless of its sociological value, however, this book is a wonderful read. Highly recommended.


Slurp
Saturday, May 27, 2006

Slurping up files is one of Perl's strengths and I've always assumed that Lisp could not do as well. I was wrong. This slurping in Lisp page demonstrates how at least some Lisps can do Perl one better. I found it via Stefan Scholl's blog. Go Lisp, go!


brief note on CL-Graph
Friday, May 26, 2006

Someone rightly mentioned to me that CL-Graph doesn't have much in the way of a high-level overview. Tinaa documentation isn't bad for seeing the trees but it doesn't make the forest any easier to navigate. Here, then, is a very brief snapshot of CL-Graph from on high:

Structurally, a graph is a container-uses-nodes-mixin. This means that the things you put in the container are wrapped up in some other structure (a node). Examples of containers that use nodes are graph-container, binary-search-tree, heap-container. Examples of containers without nodes are list-container, array-container, basic-queue and so forth. In practice, this means that when you add a thing to a graph, the thing gets wrapped up in a vertex structure. When you do a find-vertex, you get back the vertex structure (not the element you added). For example:

? (add-vertex *graph* 23)
#<23>
:new        ; tells you that this was a new vertex
? (find-vertex *graph* 23)
#<23>     ; the vertex
? (describe *)
#<23>
Class: #<STANDARD-CLASS GRAPH-CONTAINER-VERTEX>
Wrapper: #<CCL::CLASS-WRAPPER GRAPH-CONTAINER-VERTEX #x8686356>
Instance slots
ELEMENT: 23
DEPTH-LEVEL: 0
VERTEX-ID: 0
TAG: NIL
GRAPH: #<GRAPH-CONTAINER [1,0] #x8E0EBD6>
COLOR: NIL
RANK: NIL
PREVIOUS-NODE: NIL
NEXT-NODE: NIL
DISCOVERY-TIME: -1
FINISH-TIME: -1
VERTEX-EDGES: #<VECTOR-CONTAINER 0 #x8E0B3EE>
; No value
? (element (find-vertex *graph* 23))
23

The main functions for creating and querying graphs are add-vertex / find-vertex / delete-vertex and add-edge-between-vertexes / find-edge-between-vertexes and delete-edge-between-vertexes. Once you have a graph or a vertex, you can map a function over its elements / edges using iterate-elements or iterate-edges. If you want to actually map over the vertexes, you can use iterate-nodes. There are also lots of iteration / collection functions for dealing with the children and parents of vertexes in directed graphs.
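
Here is a tiny sketch tying those names together (I'm assuming all of these symbols are exported from the cl-graph package):

(let ((g (cl-graph:make-graph 'cl-graph:graph-container)))
  ;; adding an edge adds any missing vertexes -- part of the bookkeeping
  (cl-graph:add-edge-between-vertexes g 'a 'b)
  (cl-graph:add-edge-between-vertexes g 'b 'c)
  (cl-graph:iterate-elements g #'print)   ; the things you added: a b c
  (cl-graph:iterate-nodes g #'print)      ; the vertex structures
  (cl-graph:iterate-edges g #'print))     ; the two edges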

Quite a lot of CL-Graph is bound up with different ways of querying, collecting and iterating over the graph structures. It also has some simple graph algorithms (mostly of the summary sort, like clustering coefficient, vertex degree, etc.) and it has decent export to DOT and integration with GraphViz. Its main claim to fame is that it does a good deal of bookkeeping to keep track of vertexes and edges so that you can concentrate on using the graph to do your stuff.


Notes on Lisp Library Management
Thursday, May 25, 2006

I'm continuing to muddle through various ideas for Enterprise Lisp. One thing I think that the Lisp community needs is better library management. This means:

  • System definition
  • New System installation
  • System maintenance

ASDF (and MK-Defsystem and a few other defsystems) serve the system definition role admirably. My own defsystem-compatibility aims to make it easier to use multiple defsystems simultaneously (*). ASDF-Install does a great job grabbing new systems. The missing piece is system maintenance: making sure you have the latest of everything and that it all works together. What follows is my proposal for improving the situation.

Desiderata

  • System maintainers want to do a good job but managing version numbers is difficult, time consuming and often goes undone.
  • System consumers just want things to work.
  • If the process that handles system maintenance runs on the client, then you add another layer of things that must be maintained. Therefore, put as much computation as possible on the server. (**)

The view from above

The maintenance system consists of the following processes:

  • Checker - Monitors known systems for changes. When changes occur, generates tickets for Tester.
  • Tester - Tests system installation (similar to ASDF-Install-Tester). Note that Tester is distributed across different machine architectures and Lisps.
  • Reporter - Uses the results from Checker and Tester to produce pretty pictures, update RSS feeds, send text messages, generate press releases, call out the national guard, etc.
  • Clients (that's us!) - Communicate with Webber to see if systems need to be updated.
  • Webber - Communicates with clients (generally ASDF-Install) to determine if systems need to be updated.

Questions and Answers

This is the section where careful readers get to catch my errors and e-mail me as to how to make things better... So pay attention.

Q: What do you mean by known systems? A: Initially, Checker will work with ASDF-Installable systems. The main point of this, however, is to let Checker know where a system is to be found so there is no reason that other systems could not be registered for the service.

Q: How does Checker know when a system has changed? A: Checker does two periodic checks on known systems. First, it compares the :last-modified date from an HTTP HEAD request. If the last-modified date of the system's tarball is later than the date Checker saw before, then the system may have changed. Second, Checker uses the system definition to build a system signature: a list of system files and file-write dates. Checker can compare the signature it has with the new signature to see if files have been added or removed and to see if file dates have changed. Note that occasional false positives are OK.
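
In code, the signature might be computed something like this. It's a sketch that only walks top-level components, and not necessarily how Checker will actually do it:

(defun system-signature (system-name)
  ;; a real version would recurse into modules rather than assuming
  ;; every component is a file at the top level
  (loop for component in (asdf:module-components
                          (asdf:find-system system-name))
        for path = (asdf:component-pathname component)
        when (probe-file path)
        collect (cons (file-namestring path) (file-write-date path))))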

Q: What about *features*? Won't they mess with this signature you're talking about? A: Yes, features are a problem. For the curious, there are 52 systems on the Cliki that contain features in their system defs (out of about 250). These systems contain 40 different features. Most of them are operating system or Lisp implementation related. A few are more specialized. I propose to get around the features problem by using a custom reader to grab every file in a system definition.
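
One way to build such a reader -- a sketch, not necessarily the approach I'll end up taking -- is a readtable in which #+ and #- always keep the guarded form instead of consulting *features*:

(defun make-feature-blind-readtable ()
  (flet ((keep-form (stream subchar arg)
           (declare (ignore subchar arg))
           (read stream t nil t)      ; discard the feature expression
           (read stream t nil t)))    ; keep the guarded form regardless
    (let ((rt (copy-readtable)))
      (set-dispatch-macro-character #\# #\+ #'keep-form rt)
      (set-dispatch-macro-character #\# #\- #'keep-form rt)
      rt)))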

Conclusion

The system described above is being actively developed and I hope to have something beta-testable sometime this week (oh, oh, that sounds like a promise; I have to stop writing now and get back to coding). Please let me hear from you if I've missed something obvious or subtle.


(*) To be fair, defsystem-compatibility (DSC) currently only supports ASDF and EKSL's system definer: Generic Load Utilities. However, only time and lack of clamor prevent DSC from working with others.

(**) Yes, this gives a central point of failure but the web server and hardware are up to the task.


On the evolution of libraries
Wednesday, May 24, 2006

Lately, I've been pondering the problems of library evolution. At one extreme of library creation is put-it-all-in-here. At the other is carve-nature-at-its-joints, which means lots of dependencies. Philosophically, I tend towards the latter because duplicated code sucks (do it once, do it right) but it leaves one with an icky library management problem.

Then there's the issue of what to do with patches and additions. If a maintainer exists, patches and additions can pass through them. If there isn't a maintainer, or if they find the changes don't fit with their vision of the library, then what happens? One can create a fork -- but that adds another library that may have issues working with the pre-fork version, and it raises the code duplication issue in spades. One can also create another library that depends on the original... but now we have the yet-another-library problem.

It's interesting that we're culturally willing to create websites as wikis and are finding ways to deal with the problems of attribution and controversy. What would happen if we had a code wiki where everyone could edit everything? It's a scary thought because it takes so little to make code fail and because a little code can do a lot of damage very quickly (the software fault travels around the world before the patch gets out of bed, or something...). Rambling on, this makes me think of the difference between syntax and semantics: word wikis work pretty well because they use language to pass around semantic knowledge and humans are remarkably good at dealing with faults. Code wikis deal only in syntax and computers are almost unbearably brittle. The magic of computing, as my PhD advisor liked to say, is that it performs semantic transformations using only syntax. The problem is that the semantics all comes from the programmer. So what sort of infrastructure would make a code wiki possible?


Ora Lassila on the semantic web
Tuesday, May 23, 2006

Ora Lassila, a research fellow at Nokia, gave an interesting-sounding talk (pdf) last week to the W3C advisory committee (I wasn't there, I'm just name dropping - smirk). His slides include these provocative bullets:

  • Any specific problem (typically) has a specific solution that does not require Semantic Web technologies
  • Question: Why then is the Semantic Web so attractive? Answer: For future-proofing.

I think that this sounds about right and is part of the reason for the general attractiveness of XML and Lisp: code is data is code. When we need to, we can grovel over our source easily and do cool stuff.


Social Phenomenom (er, Phenomenon)
Tuesday, May 23, 2006

I've been looking at Ruby quite a bit recently while working with John Wiseman on Montezuma, a port of Ferret (which is itself a Ruby port of the Java Lucene text indexing engine). I don't see anything particularly special about Ruby; overall, it seems like another reinvention of the wheel with more syntax with which to be confused! That said, all these "scripting plus" languages do fill a niche that Lisp has not been able to play in because Lisp has too much baggage (in my holy opinion) and because Lisp qua Lisp is missing batteries like sockets, web services, etc...

What I find most interesting, however, is the social phenomenon: why and how did Ruby and Python make it to the big time? Why did Harry Potter become such a hit? Ruby, Python and Harry are all good but none of them seems, to me, markedly better than their competitors... Who researches this kind of stuff? Are there papers out there that claim to explain what is going on? Tipping Point? Wisdom of Crowds? Where is Malcolm Gladwell when you need him?


the Problem with Threads
Tuesday, May 23, 2006

Someone mentioned Edward A. Lee's The Problem with Threads paper over at Lambda the Ultimate and I just finished reading it. The argument is simple: people live in a concurrent world so concurrent programming shouldn't be all that hard. But concurrent programming with threads is very hard, so what's the problem? The problem is that threads are strongly non-deterministic and programmers (and languages and frameworks) must make great efforts to recreate determinism. It would be better, thinks Lee, to keep everything as deterministic as possible and only add non-determinism when necessary. I.e., we need better abstractions (perhaps built on top of threads).

Lee goes on to mention many of the alternatives and suggests that they have not taken deep root in part because they are not necessary (yet -- there just aren't enough massively parallel non-trivial systems in use), because sequential programming is at the heart of all mainstream languages (and most non-mainstream ones too), and because it's hard to do multi-language programming (tool support, etc.). The message, he says, is clear: "we should not [aim to] replace existing languages." He suggests coordination languages as a different extension mechanism (think Aspects and weavers) but even those have a hard time taking root because "language wars are religious wars and few of these religions are polytheistic" (I love that quote). He cites work with graphical notations and draws the parallel to UML's ability to abstract above specific language syntax and allow multi-language use. There is, he hopes, hope.

This is a very readable paper with great examples and stories about how hard (thread-based) parallelism can be and is. I don't know if his answers are correct (but who am I to know?! <smile>) but I strongly agree that we need better models of computation and better languages to support those models. This should, perhaps, give Lispers hope. After all, what language do you know that is better positioned to support malleable syntax and coordination language experimentation?


Apologies: it was my fault, not CL-Markdown's
Tuesday, May 23, 2006

I meant &lt;pre&gt;. Depressing.


CL-Markdown update
Monday, May 22, 2006

I've removed CL-Markdown's dependency on LML2 (though CL-Markdown-Test still uses it to generate the comparison reports). I've also fixed several small tickets, the most important probably being the correct handling of line breaks within <pre> sections. I also changed the signature of the markdown form. The new one looks like:

Convert source into a markdown document object and optionally render it to stream using format. Source can be either a string or a pathname or a stream. Stream is like the stream argument in format; it can be a pathname or t (short for *standard-output*) or nil (which will place the output into a string). Format can be :html or :none. In the latter case, no output will be generated.

The markdown command returns (as multiple values) the generated document object and any return value from the rendering (e.g., the string produced when the stream is nil).

I hope that's clear. It makes it easy to go from strings or files to strings or files in any supported format (i.e., HTML).
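
For example (package prefix omitted), here is a string-to-string round trip:

? (nth-value 1 (markdown "this is *clear*, right?" :stream nil :format :html))
;; => the rendered HTML as a string (the second return value holds the
;;    rendering result when stream is nil)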

There is still some distance to go before the basics are complete but things are starting to look pretty good.


Proving once again that I can do math, but not arithmetic...
Thursday, May 18, 2006

Michael Price wins the prize for quickly informing me that the version number of ASDF-Install on the website was 0.5.1, rather than the 0.5.2 that I had claimed. This should now be fixed. I should also mention that ASDF-Install has a new mailing list all to itself. See asdf-install-devel for details.

Finally, I'm in the process of bringing the tutorial back up to date.


ASDF-Install update
Thursday, May 18, 2006

The version now on Common-Lisp.net has been radically restructured and also has several patches and improvements. The restructuring just pulls various related forms out of installer.lisp and into their own homes. Mostly, this was to help me organize and maintain it. The improvements include:

  • More restarts involving the GPG key verification process so that, for example, you can switch to another process, retrieve a key and then try again.
  • ASDF-Install now prints its version string when it is first loaded (it's at 0.5.2).
  • I've tried to simplify the #+ / #- madness. There is still a ways to go before this is complete.
  • Note that ASDF-Install now only installs the packages you request and their required dependencies. Earlier versions would install the package associated with every system definition that it downloaded.
  • There is a new keyword argument for the install command (see the example below). If you specify :propagate t, then install will try to get the latest version of every package required during the installation. If propagate is nil (the default and previous behavior), then ASDF-Install will only download the requested package and any that you do not yet have. It will not download any packages that you have already installed. Note that ASDF-Install still isn't doing any useful version checking, but being able to ask for everything fresh seems like a useful stopgap measure.
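
So, for example, freshening a package and everything it pulls in looks like:

;; get the latest metabang-bind and the latest of everything it requires
(asdf-install:install 'metabang-bind :propagate t)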

Please let me know if anything goes astray.


Software status update
Thursday, May 18, 2006

  • CL-Markdown : now works with SBCL, OpenMCL, and Allegro (alisp only at this point). Please post tickets to the Trac.
  • Trivial-Shell now exists (with thanks to Kevin Rosenberg and Alexander Repenning). Current interface is very trivial, some more stuff coming soonish.
  • ASDF-Install has moved to Common-Lisp.net. The version there has been restructured and has several improvements. It has a Trac too.

Doing it like we've always done
Sunday, May 14, 2006

A nice essay on the dangers of resting on our mental laurels

In the practice of security we have accumulated a number of "rules of thumb" that many people accept without careful consideration. Some of these get included in policies, and thus may get propagated to environments they were not meant to address. It is also the case that as technology changes, the underlying (and unstated) assumptions underlying these bits of conventional wisdom also change. The result is a stale policy that may no longer be effective…or possibly even dangerous.

...

This is DESPITE the fact that any reasonable analysis shows that a monthly password change has little or no end impact on improving security! It is a "best practice" based on experience 30 years ago with non-networked mainframes in a DoD environment — hardly a match for today's systems, especially in academia!


Enterprise Lisp Wiki
Saturday, May 13, 2006

I just created a simple wiki for Enterprise Lisp on Infogami (yes, they are the same people that did Reddit. So. <smile>). You'll need to join in order to edit. Let me know if that's a problem for you. I'm happy to post changes and ideas from other people if they send me an e-mail.


Drew McDermott and anti-literacy
Saturday, May 13, 2006

Drew McDermott has a nice essay on the benefits and difficulties of literate programming. The bit that resonates with me the most is this paragraph:

During program development, I tend to build a partial solution to a problem, then realize it's wrong and discard it or turn it inside out. It's very hard to force yourself to write a bunch of prose during this process; not only is the writing mostly wasted, it slows down your thought processes.

except that I find this happening to me about three-quarters of the time that I try test-first development. I don't know exactly what I'm doing (cf. Paul Graham's wonderful introduction to On Lisp or ANSI Common Lisp (I can't remember which one)) and writing tests often turns out to be silly. Perhaps I'm just not getting it, or can't rid myself of other bad habits, or perhaps test first isn't something that makes sense all of the time.


On Selfish Memes: Culture as complex adaptive system
Saturday, May 13, 2006

This is a paper (pdf) that hits all the buzzwords hard and then hits them some more. We've got evolution, complex adaptive systems, power laws, dynamical systems, fitness landscapes, memes, phemes, and all the rest. The goal is to use memetics (which sounds way too much like dianetics to make me comfortable!) to explain culture and society. The chief problem is that (in the paper's own words)

culture is multi-layer in hierarchies of description object, parts constituting the higher level of description non-linearly and so on. The important note we have from works on conventional cultural analysis is that culture is developed in the ways of how cultural units influence each other.

...

As a tool of cultural analysis, we can see by now that meme is a representation of diffused cultural unit. It is shown that meme concerns diffusion of the perceived; that is why memetics are close to the discussion about epidemiology of rumors (Lynch, 1998). If a meme pass through someone's brain by the process of perception, there is a process of interpretation and adoption before it goes to the next diffusion. However, the interpretation and adoption is frequent giving different output to be diffused. This is what we can see from our analysis and the above computational experiment and become the micro-properties of memetic process.

(The writing style -- loosely speaking -- doesn't help)

The level of abstraction hides so much philosophical sleight of hand and woolly thinking that no amount of formalism, charts and graphs can save us.

... meme is the cultural unit that imitated as an abstraction and neurally-stored in the brain. Since it is an abstraction, we are not allowed to assume meme as smallest information unit in cultural evolution in general, but it is the smallest information we use on explaining any cultural evolution. Thus, meme can be a very small part of cultural objects (e.g.: note of music, the way use of shoe) and even the big part of culture (e.g.: nationalism, religion). In other words, meme is a matter of analytical tool on explaining culture and its dissemination, propagation, and in general, evolutionary process.

This is one of the oldest self-inflicted tricks in the book of bad simulation: insert that which is to be proved into the foundations of the simulation and be astonished when the simulation acts that way. (Note that I'm not suggesting bad faith on the part of the paper's authors -- it's just that natural stupidity is something that must be fought against constantly.)

In closing, I should mention that this paper is one of the references for a United States Army SBIR Request for Proposal. The RFP is reasonable enough but if this paper is supposed to be guidance... I'm more frightened than ever.


If voting machines were houses
Friday, May 12, 2006

Nice imagery:

"In the other ones, we've been arguing about the security of the locks on the front door," Jones said. "Now we find that there's no back door. This is the kind of thing where if the states don't get out in front of the hackers, there's a real threat."

Free, fair, and trusted elections...


More on in-package
Friday, May 12, 2006

Zach Beane adds his two dollars with a very interesting and technically astute post on in-package.


Earthquake Weather
Friday, May 12, 2006

Tim Powers has a quirky imagination that loves to connect the un-connectable and distort the usual interpretation of reality. I read The Anubis Gates several years ago and loved it. For some reason, I didn't pick up any other Powers books until last week. Earthquake Weather mixes science, the occult and several thousand plot twists into a tale of the Fisher King, Dionysus, psychiatry, multiple personality disorder, ghosts and love. It's not a book for the faint hearted; you'll have to stay awake to follow the contortions and add in an extra helping of suspended disbelief. It's also not a book that will leave you thinking you better understand the universe or yourself. It is, however, a playful romp through the imagination and a heck of a lot of fun.


Graphing your relations with Address Book, OpenMCL and CL-Graph
Friday, May 12, 2006

Apple's Address Book provides a convenient system-wide repository for contact information. Recent versions include the ability to connect contacts together by relations. For example:

  • the Father of Gary King is Stephen King (true but not the novelist)
  • the Assistant of Jane Smith is John Doe
  • the Spouse of Wendy Delaney is Gary King

One thing that Address book doesn't provide is a way of viewing these relationships. Today's task: build an application that reads the Address book database and produces a graph showing the relations.

My first thought was to do this entirely in Apple's Object Oriented Cocoa framework. It is, after all, the easiest and most supported way to work with Apple's code. A few things, however, stood in my way: Cocoa is a big framework and I'm still learning it; I'm not aware of any Cocoa graph manipulation library that is comparable to CL-Graph; and I'm not aware of any Cocoa graph layout mechanisms comparable to GraphViz. So my second thought was to dig up some old e-mails (and here) between Richard Cook and Gary Byers on the OpenMCL mailing list and use OpenMCL and its ObjectiveC Bridge. I also looked at Richard's code for his Address Book / Google Maps mashup (look towards the bottom of the page).

Step 1 : create the interface database that lets OpenMCL talk to the Address book framework.

Step 2: use the Address book interface and Lisp to build the graph and make a DOT file of it that GraphViz will like

Step 3: use GraphViz to make an image. Voila!

Now to go over the steps in a bit more detail.

Creating an Interface Database

The OpenMCL documentation (with special thanks to Dan Knapp) is very helpful and thorough. First one uses FFIGEN to read the ObjectiveC framework and create FFI files; then one uses parse-standard-ffi-files in Lisp to create the CDB databases that OpenMCL likes. One caveat: don't create the FFI files in one of your OpenMCL Lisp repositories and then try to create the CDBs in a different repository -- yes, I did that, and it took me a long time to figure out why things weren't working! The error message was cryptic but it was a head banger once I figured it out. I did find that I needed to go through the AddressBook headers file by hand and do each one individually (but that may be due to user error on my part). My populate script is available if you're interested in seeing it.

Using the Address book from OpenMCL

Once again, most of the work had been done for me. I used the webkit example as a template along with advice from old e-mails on the OpenMCL mailing list. The final result is available. Once this file is in place (in the OpenMCL examples folder), all you need to do is (require 'addressbook).

The great thing about the ObjectiveC bridge in OpenMCL is that you can use Cocoa classes almost as easily as you can in ObjectiveC and XCode itself. Even better, you can develop interactively without that nasty edit/compile/run cycle. (Compilers have gotten faster but it's still an appreciable delay.) Rather than go over the details here, I'll let the code do the talking.
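
As a taste, here is a condensed sketch of the graph-building part. The CL-Graph names are the ones mentioned elsewhere on this blog; the address-book accessors are hypothetical stand-ins for the ObjectiveC bridge calls in the actual code:

(let ((g (cl-graph:make-graph 'cl-graph:graph-container)))
  (dolist (relation (all-address-book-relations))  ; hypothetical helper
    (cl-graph:add-edge-between-vertexes
     g (relation-source relation) (relation-target relation)))
  ;; write a DOT file for GraphViz to lay out (step 3)
  (cl-graph:graph->dot g #p"/tmp/relations.dot"))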

Use GraphViz to layout the graph

and here it is:

(You'll probably want to see it in its own window because of the scaling.) Here is the DOT file that created it. Even this little sample shows a bunch of problems:

  • The relations don't have to be in the Address Book -- which, I suppose, is a good thing -- so it's easy to have inconsistencies
  • Similarly, adding a relation from A to B doesn't create the back relation from B to A
  • Finally, the fact that it's not all that easy to add relations in the Address book means that not many get added (at least, that's true in my case).

By the way, if anyone knows how to tell GraphViz not to draw the edges and labels on top of one another, I'd appreciate it if they would tell me!

(Note that I found and fixed a minor display bug in CL-Graph while working on this example. If you want the edge labels to appear, you'll need to update).


In which we learn more about in-package
Thursday, May 11, 2006

Back when I was learning Lisp, I typed (in-package :foo) or (in-package "FOO"). Then one day my mentor told me that in-package was a macro so just typing (in-package foo) was fine. Was he right?

If he was talking only about the Listener, then I’d say that he was. If, on the other hand, he had been talking about using the bare version of in-package in files that are compiled, he was not. Yesterday, a #lisp discussion revealed to me the error of my ways (or, at least, one error). It is correct that in-package is a macro and that the Lisp package machinery will find the correct package and place you in that package when it encounters (in-package foo). At issue is what the “foo” signifies, and that depends on the current state of your Lisp when the in-package is reached. If you are in package bar, then “foo” will signify bar::foo; if you are in package goo, then “foo” will signify goo::foo. If the symbol “foo” is not yet interned in the current package, then Lisp will create it and intern it there. All of this sounds reasonable enough, so aside from namespace pollution, what is the problem?

Suppose you are working on a system with multiple packages. Suppose that you are in package foo and working on a file named “qurp.lisp” in package bar, and that the natural order of these packages is that package foo does not exist when “qurp” is loaded into a fresh Lisp. Now imagine what happens if you recompile the system… well, “qurp” has been modified so it will get recompiled. The Lisp compiler will look at the (in-package bar) that occurs on the first line of the file and see (in-package foo::bar). Remember that foo is the current package, and all seems fine because the package foo exists. Everything, in fact, seems to go swimmingly.

But it hasn’t.

Because when you quit and restart Lisp and try to re-load the system on which you were working, Lisp will try to load “qurp.fasl” and encounter the symbol foo::bar -- but the package foo will not have been defined yet (remember, qurp gets loaded before foo is defined) -- so you’ll get an error and (if you were me) wonder why that happens…

If you’ve followed me, then you’ll now know why it happens and what to do about it: the solution is never to quit Lisp. Ever. Actually, the solution is to always preface the package names in your in-packages with either a “:” or (better yet) a “#:”. The first form puts the package name into the keyword package, which is guaranteed to be available. The second refers to the symbol in no package at all (speaking slightly loosely) and will also never get you into trouble.
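
In other words:

;; both of these are safe in files that get compiled:
(in-package #:my-package)  ; uninterned symbol -- no package pollution
(in-package :my-package)   ; keyword -- always available
;; this one is at the mercy of whatever package is current:
(in-package my-package)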

I hope that’s clear. I’d say more but I’ve got about 30 more systems on which I need to run this script:

find . -name "*.lisp" \
  -and -not -path "*/asdf-systems/*" \
  -and -not -path "*/_darcs/*" \
  -and -not -path "*/tags/*" \
  -exec perl -pi \
    -e "s/\(in-package\s+([^:#\"][^)\"]*)/(in-package #:\1/gi" {} \; \
  -exec perl -pi \
    -e "s/\(in-package\s+\"([^\"]+)\"/(in-package #:\1/gi" {} \; \
  -print

Smile!


Why don't Apple Mail messages have proxy icons?
Wednesday, May 10, 2006

Including those little icons in the window title bars would make organizing mailboxes a little easier...


Enterprise Lisp notes
Tuesday, May 9, 2006

Here are some of the problems I’d like to help fix with enterpriselisp.com:

  • It’s hard to know which libraries work on which Lisps
  • It’s hard to keep libraries up to date
  • It’s hard to deal with multiple library dependencies
  • It’s hard to know library provenance and quality

My plan for Enterprise Lisp is to make it a higher-powered ASDF-Install-Tester and to integrate it with ASDF-Install so that projects like ASDF-Upgrade can be made more robust without requiring extra steps and hoop-jumping from library developers.

As I said before, Enterprise Lisp can't succeed without love (well, I didn't actually say that but I was thinking it. Honest) and lots of ideas. Let's collaborate.


W3C HTML Validation bookmarklet
Tuesday, May 9, 2006

This is hardly anything, but I find it useful so it's likely that others will too.

Graphic version: (*)

Text version: Validate

To use either of these, just drag and drop the bookmark to your bookmark bar. Then, when you want to validate a web page you're viewing, click the bookmark you created. I've tried this in Safari and Firefox but it should work in any modern browser.

(*) If you use this a lot you should probably copy the image to your own site...

(**) And don't try it on this page because it's not valid. hrmf. (!)


CL-Markdown trac setup
Tuesday, May 9, 2006

I have a trac setup now for CL-Markdown. Intrepid beta testers should be able to submit tickets.


CL-Markdown is alive
Tuesday, May 9, 2006

CL-Markdown arrives at Common-Lisp.net (though this reporter has to admit that it is slightly drunk and slurring its speech). Aside from remaining glitches, CL-Markdown's biggest weakness is probably the number of other systems on which it depends (see the documentation if you don't believe me!). Part of this is due to laziness on my part: I generate HTML by first creating LML code and then using Kevin Rosenberg's very nifty LML2 package. I've been developing CL-Markdown with MCL (yes, it still exists... though probably not for much longer. Sigh.) and OpenMCL under OS X. I'll test things out with other Lisps soon. If you like (or dislike) CL-Markdown, then please enjoy, kvetch, criticize, and send me job offers (smile).


More bad news for United States Computer Science
Monday, May 8, 2006

Industry support for research drops:

Industrial funding for R&D in science and engineering (S&E) fields at universities and colleges dropped 2.6% in FY2004, to $2.1 billion. This was the third year in a row that industry funding declined, having dropped 1.1% in 2003 and 1.5% in 2002. According to the NSF InfoBrief, "the industrial sector is the first source of academic R&D funding to show a multiyear decline since the survey began, in FY 1953."


Ironic
Monday, May 8, 2006

Why Arial is everywhere

When Microsoft made TrueType the standard font format for Windows 3.1, they opted to go with Arial rather than Helvetica, probably because it was cheaper and they knew most people wouldn't know (or even care about) the difference. Apple also standardized on TrueType at the same time, but went with Helvetica, not Arial, and paid Linotype's license fee. Of course, Windows 3.1 was a big hit. Thus, Arial is now everywhere, a side effect of Windows' success, born out of the desire to avoid paying license fees.

Funny that a company so obsessed with license fees managed to weasel out of paying this one...


Montezuma
Sunday, May 7, 2006

John Wiseman has been assiduously porting Ferret (Lucene ported to Ruby) to Common Lisp. The project is Montezuma. On Friday, he pasted this tantalizing tidbit.

I've been helping out a little porting some of the search code and it's exciting seeing it actually running on useful data. I think Montezuma is going to be big! Maybe we'll get our revenge. That makes no sense but the silly part of me was unable to resist.


CL-Markdown comparisons
Sunday, May 7, 2006

I've put up some web pages comparing the output between CL-Markdown and Markdown. The results aren't beautiful but most of the differences are at the level of glitches in my regular expressions so I'm pretty happy. As I mentioned before, I'm still not sure if my implementation strategy was a good one or not (and it's hard to be sure because most of my work occurred in the interstices of my life -- not the most effective development strategy) but I think I'm relatively happy with the results. I'll know for sure once I see how hard it is to fix the final glitches and how easy it is to add some more advanced (non-markdown) features.

I'll be setting up the web site, etc. soon. In the meantime, Markdown addicts can look at the Markdown wiki or at Levi Pearson's Common Lisp code.


CL-Markdown stumbles forward
Wednesday, May 3, 2006

I received two e-mails asking about CL-Markdown's status over the weekend. Aside from the Interactive Markdown Dingus (live site), I haven't done anything Markdown related since January. It's one of the many projects I want to finish but can't always find much time to work on...

That said, I brushed off the code on Monday and tried to remember what my plans had been. I know that I started by looking at the Perl and Python sources but soon decided that it all seemed easier than that (famous last words, I know). I went with the following plan:

  • Read in the paragraphs of text while doing basic encoding (i.e., is it a header? is it part of a list? etc.). This gives us a list of chunks.
  • Iterate over the chunks and handle various span-related bits and pieces (i.e., looking for emphasis, links and the like).
  • Output the list with a spiffy recursive function that knows when to add depth to the tree being constructed.

The first two steps went quite well (though there is the added complication of stripping off the starts of lines depending on the current context; i.e., if we are in a blockquote already, then we need to pull off the initial '>'). The last step turned out to be surprisingly hard for me to quite grok and was the main reason development came to a halt in January.

Things still seemed harder than they should have -- I must not quite understand the problem! -- this time around, but after some thrashing, I developed the function I wanted and all is mostly well. I'm still not sure if my original theory (start with a list and create the tree later) was flawed or if I'm just being stupid about the list-to-tree conversion... Sometimes it's hard to tell.

There are still a few features completely unimplemented (e.g., special character escaping) and lots and lots of glitches. I wrote a program that compares the output of regular markdown and CL-Markdown using John Wiseman's CL-HTML-Diff. I'll get some good and bad examples up later tonight.

I'll also be throwing the code up to Common-Lisp.net in the near future so that everyone can find ways to break it!


Bye bye (dumb) blackbird
Tuesday, May 2, 2006

(I've not read the study and think that you have to take these things with a big lump of rock salt, but)

Mankind can chalk up another lesson in humility: We're not the only species that can learn grammar.

...

But nine of [the] 11 starlings learned to spot [clauses] at least 90 percent of the time, identifying the utterances by pecking buttons in exchange for a food reward. This shows that there's no "single magic bullet" separating humans from animals, said UCSD cognitive scientist Jeffrey Elman, who was not involved in Gentner's study.

Cool!


No surprise
Sunday, April 30, 2006

Science Panel Report Says Physics in U.S. Faces Crisis

Why am I not surprised...


Science, Smience
Friday, April 28, 2006

It's political. It's local. It's depressing. It makes me furious.

As the Talking Heads said: facts don't do what I want them to. It should be my country's theme song.


Quick Review: Critters 3.0
Thursday, April 27, 2006

Critters is a music generation program that seems to combine really interesting ideas with a not-so-great implementation. The idea is to evolve the music you like using genetic algorithms. It supports OS X's Audio Units and has more options than you can shake a ... I'm not a musician or even an audiophile (though I often like what I hear) and the program's interface wasn't obvious enough for me to figure out easily -- gotta lower those barriers to entry! -- so I can't give much of an evaluation of the finished results. Still, I really like the idea of combining computers for generation and human input / guidance for testing and quality. Perhaps the next version will hit the sweet spot.


One Mystery Solved! Steel Bank Studio
Wednesday, April 26, 2006

Thanks to the magical self correcting web, I now remember that I was looking for Steel Bank studio! I think I would have found it if I hadn't been completely thrown off by the hair trimmer.


Lisp testing
Wednesday, April 26, 2006

Lispers who know know that there are a lot of testing frameworks out there. They spring up like weeds! I was going to continue with this metaphor but decided that it wouldn't bear fruit. Sorry. I hate bad puns. Really.

In any case, Liam Healy mentions that he recently found lisp-unit and I thought I should mention the CL-Gardeners' test frameworks comparison page (though doing so will show that it hasn't changed since February... I've been busy!). Perhaps this will inspire others to look at these frameworks and help extend the report.


Google Page Creator
Wednesday, April 26, 2006

I just spent five minutes looking at Google Page Creator today. Though it is very cool, I wouldn't want to use it for real work; too much clicking and (minimal) waiting. Still, look what I made. Tee hee.


RFID Essentials
Wednesday, April 26, 2006

RFID Essentials by Bill Glover and Himanshu Bhatt provides a high-level yet technically detailed overview of where Radio Frequency Identification (RFID) has been, is, and is going. I can't say that I enjoyed reading it but it covers a great deal of ground on the technical (how does it work? how can you use it?), the business (what standards exist? how are companies using it now?) and the computational (what are the algorithms? about what should you be concerned?) in a painless manner. If you need to know about RFID and want a book that lays it all out from soup to nuts and beginning to end, I don't know if you'll find a better one.


SBCL Professional - humor
Wednesday, April 26, 2006

I was trying to recall the SBCL project (google search) that was planning on packaging everything up nicely with support, etc. I didn't find that but I did find an interesting Lisp powered hair trimmer. Cool.


enterprise lisp
Monday, April 24, 2006

Roberta, the dual G4 that runs metabang.gotdns.com, had a minor conniption earlier today and, in a testament to the credo of Computer Scientists everywhere, I decided that the easiest way to move forward was to delete a bunch of stuff and re-install. The stuff included lots of Common Lisp libraries and ASDF-Install made it pretty easy to get everything going again... Except that it should have been much easier.

Like most lispers, I've wanted Lisp to become more popular and have been frustrated as other languages have gotten the buzz. I've also wanted to see Lisp improve (sound familiar?). Libraries help. Tools like ASDF-Install or Peter Seibel's Lisp in a Box help. Open Source efforts like SBCL or OpenMCL help. Maybe we need a new language (though we've tried that too).

Certainly Lisp (and Common Lisp) are seeing a resurgence in activity and love. How do we turn that into community and turn that into productivity? I don't know (and I often doubt that I have time to figure it out!) but I'm sure openness, participation, authenticity and trust are key. Today, I purchased the domain enterpriselisp.com (think big <smile>). My plan is to start taking small steps towards building and creating a platform for Common Lisp community -- now I'm in deep: I've said it and said it publicly.

First, what this isn't: it isn't the CLiki and it isn't ALU and it isn't the CL Directory and it isn't Common-Lisp.net. Those are all useful and important and shouldn't go away. I'm hoping to model enterpriselisp.com on something like SpikeSource: a site where libraries will get tested and integrated and supported. I have a bunch of vague noodling written on the back of napkins for how I think it will get put together. Soon (probably in the geological sense), I'll start to add details. Feedback and ideas are welcome.


Wearing Lisp
Sunday, April 23, 2006

LispVan got to hear Norman Jaffe talk about Wearable Intelligent Systems and all I got was reminded of the Extended Mind. That's probably not a great T-shirt slogan but it sounded like a great meeting and it is a great book.


Speaking of Daring Fireball: it's the Interactive Dingus
Sunday, April 23, 2006

John Gruber's Markdown Dingus is the top Google search result (try it for yourself). It provides a nice overview of Markdown syntax and lets you experiment with it. Unfortunately, it's so Web 1.0.

I decided to improve matters by writing a little "finger exercise" using Araneida and Javascript. Araneida doesn't actually do anything more than shell out to the markdown Perl script but that will have to do until I get back to CL-Markdown (and since CL-Markdown isn't in any of my critical paths, that may have to wait a while...).

The Interactive Dingus is being served from a dynamically maintained IP (using DynDNS) so I can't promise high availability. I'm also a bit of a CSS hack so I'm afraid that although the pages work great in OmniWeb and Safari, they don't format very well in Firefox. There are several other minor annoyances (e.g., the system doesn't notice cut and paste) but it's good enough for a 1.0 Web 2.0 applet. Comments and corrections are very welcome.


Congratulations and Best Wishes to John Gruber
Sunday, April 23, 2006

John has cut himself free to express himself. Awesome.


On being reminded that I'm stupid -- again
Friday, April 21, 2006

Maybe it's actually the joy of flagellation that keeps us coming back for more... I'm working on a simple Araneida-backed AJAXy thing for fun -- just a programming finger exercise really -- and spent a good 20-minutes wondering why things weren't working at all before I realized that I was loading the HTML file in my browser (i.e., file:/// ...) instead of retrieving it from the server (http:// ...). Oh boy!


What makes programming so fun?
Friday, April 21, 2006

My wife sometimes -- often! -- complains about the amount of time I spend staring and typing on my Powerbook. "What is it?" she wonders, "that makes that thing so interesting?" I wish I knew.

Why is it that some humans get "it"? Why do only some of us find this frustrating business of hacking form out of recalcitrant bits an endeavor that almost always leaves us wanting more? I don't think it's just a power urge (look, I control the virtual universe inside my computer) or basic misanthropy (heck, I like people!).


I was away, now I'm back
Friday, April 21, 2006

This week is school vacation week in Massachusetts and my youngest son and I just spent three days in Vermont. The weather was wonderful and we had a great time. Because we stayed in a cheap hotel, I was off the net the entire time. Weird!


Proposed :system-applicable-p
Monday, April 17, 2006

I'd like to propose using a property of ASDF systems to designate which Lisps and platforms a given system is supposed to work under. I'm thinking of using :system-applicable-p (but I'm open to better names). This property would be used only by ASDF-Install-Tester / ASDF-Status and would help improve the utility of the results page. Adding it would, of course, be entirely voluntary and its absence would not alter the running of regular ASDF-Install or ASDF operations. Thoughts? (Man, I gotta get me some comments.)
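
To make the idea concrete, here is one way it might look using ASDF's :properties option (a sketch only; the system name and the shape of the value are inventions, and the value format is exactly the sort of thing I'd like feedback on):

;; Hypothetical usage of the proposed property.
(asdf:defsystem :my-system
  :properties ((:system-applicable-p . (:implementations (:sbcl :openmcl)
                                        :platforms (:macosx))))
  :components ((:file "my-system")))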


Confluence
Sunday, April 16, 2006

I just heard about SpikeSource (listening to another Jonathan Schwartz interview). It's a company making money by integrating Open Source projects into a tested, quality assured "stack" and selling the service.

I also came across Mark Samuel Miller's dissertation on E:

When separately written programs are composed so that they may cooperate, they may instead destructively interfere in unanticipated ways. These hazards limit the scale and functionality of the software systems we can successfully compose. This dissertation presents a framework for enabling those interactions between components needed for the cooperation we intend, while minimizing the hazards of destructive interference.

and I've been working on ASDF-Status and thinking a lot about the mixed blessing of choice in libraries and of dependencies between libraries.

It's all part of the cosmic unconscious...


Updated ASDF-Status update
Sunday, April 16, 2006

Having received some feedback and added a dash or two of my own thoughts, I want to revisit my previous post about ASDF-Status. ASDF-Status is a tool to help library developers know when and where (and perhaps why) their system is failing and a tool to help lispers know what systems are out there for their platform. Many library developers don't intend their libraries to function on all platforms -- the hassle factor may be too high or the tool may be Lisp implementation specific. Currently, there is no agreed-upon syntax for a system to say "I only work on Windows Vista running Lisp Works 2000" (or whatever) so ASDF-Install-Tester (and ASDF-Status) blithely tries everything everywhere.

What's more, systems can fail to install in a large, though finite, number of ways and ASDF-Status provides a very coarse description of what works and what doesn't. A putative future version may improve on this (thereby greatly improving its utility). The heart of the problem is that a system may fail to install under ASDF-Install-Tester for the most trivial of reasons (perhaps a bit of interaction was required... or perhaps a given Lisp runs by default with very stringent compiler checks...). That same system may install happily or, at least, easily for some lisper in the wild.

All that being said, I think ASDF-Status is a useful tool (if nothing else, those 404s should give someone pause!) but needs to be taken with a very large crystal of salt.


ASDF-Status update
Sunday, April 16, 2006

I've re-run ASDF-Status on 3 versions of Allegro, two versions of OpenMCL, one version of SBCL and CLISP (all on OS X). There are 18 new systems but the overall results are still pretty poor across the board. OpenMCL (version 1.0) does the best with 152 successful installs (out of 242). It is followed closely by Allegro 7.0 and 8.0 and then SBCL 0.9.9. Allegro in Modern mode scores the worst with only 92 successful installs (this isn't surprising: since 'regular' Allegro does so well, correcting these systems is probably just a matter of fixing the case of a couple of symbols.)

Every time I look at ASDF-Status's output, I get about a thousand ideas for ways that it could be better and more informative. Perhaps I'll actually get around to improving things before I run it again. I also want to throw in another plug for the Tinaa-produced documentation. I hope to add some graphs and improve the display of internal and external symbols next. Other ideas are welcome.


Leadership and followership
Sunday, April 16, 2006

When I read the following in Authentic Business I slapped my head and said "of course".

The fundamental weakness of hierarchical leadership is followership. The stronger a hierarchical leader is, the greater the weakness they create in others. By taking responsibility for something, you take it away from someone else, and a person without responsibility reverts to childhood neediness. The conditioning of our society is to follow and, faced with a strong hierarchical leader, most of us will settle back and let them take responsibility for us.

I think that this is important and often unrecognized both in society and in business. I also think that this is one of the many reasons that Open Source and Agile methods are so effective. They are empowering.


Maps
Sunday, April 16, 2006

Michael Gastner and Mark Newman have produced a new method (pdf) for creating density-equalizing maps (remember these?) using diffusion. Their method is "conceptually simple and produces useful, elegant, and easily readable maps". Their paper has been put to work by the University of Sheffield's WorldMapper site. There are some cool maps.

I'm not sure how expensive the math is (I haven't read the paper yet) but this could be a cool interactive demonstration project.


Jonathan Schwartz at OS Con 2005
Saturday, April 15, 2006

I just listened to Jonathan Schwartz being interviewed at OS Con 2005. There was what sounded to me like some initial hostility but Schwartz held his own and did a remarkable job. My favorite bit occurs towards the end when the interviewer is asking about why someone would use Open Office rather than some other office productivity package. Schwartz asks the interviewer if he'd ever used Open Office and the interviewer replies:

Yes and I found it to be somewhat slow and buggy and I think that people would rather pay for software that is ... slow and buggy in different ways.

Everyone started laughing!


ASDF-Status, meet Tinaa. Tinaa, meet ASDF-Status
Friday, April 14, 2006

I doctored up ASDF-Install-Tester so that it runs Tinaa on every system it tests. If nothing else, it gives my computer something to do and I now have 83-Megabytes of remarkably redundant HTML! Aren't computers wonderful?! (It's only about 24-Megabytes of actual data but it takes up 83-Megabytes of disk... damn wasted partial blocks).

More seriously speaking, I see this as a way to help bullet-proof Tinaa and test out new ideas for presenting overviews of systems. One thing that is immediately clear is that there needs to be a better way to combine multiple Tinaa runs into a single whole -- there's just no good reason to re-document the same sub-system every time it is used. I think that CL-PPCRE is wonderful, but that doesn't mean that I want to see it documented 14-times! One side effect of improving this sort of "global" documentation is that it may make it more clear how to carve nature at the joints and find some of the bits that could be put together into libraries that everyone could agree on (as I said, I remain an idealist!).

(By the way, if you'd rather not have your stuff documented in this fashion, please let me know. Also, several systems don't appear because of some current bugs in ASDF-Install-Tester. I'll update this as things improve).


Lisp and life as of 14 April 2006
Friday, April 14, 2006

I've been hacking on a variety of small projects the last few days both in code and in my life. I'm looking for the big strike but have to settle for slow progress. I just keep reminding myself that "if it was as easy as I'd like it to be, it would have already been done!". In any case, and just for the sake of documentation, here's a partial list of progress (not all of which has been committed):

  • Added an :equality-test option to LIFT testsuites. This lets you specify the default test to use in calls to ensure-same (see the sketch after this list).
  • Improved Tinaa in various small ways
  • Updated ASDF-Install-Tester to handle Allegro's "modern" [sic] mode and began running tests (on Allegro 7.0, Allegro 8.0, CLISP, OpenMCL (both 1.0 and the latest from the CVS repository), SBCL 0.9.9 and SBCL 0.9.11). All of this is on OS X.
  • Plugging away on learning Ruby so as to do what I can to help with Montezuma.
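
Here's roughly what the new :equality-test option looks like in use (a sketch; the suite and test names are made up and the option syntax may differ slightly from what's in the repository):

;; The suite-level :equality-test becomes the default for ensure-same.
(deftestsuite fuzzy-math-tests ()
  ()
  (:equality-test (lambda (a b) (< (abs (- a b)) 0.001))))

(addtest (fuzzy-math-tests)
  almost-equal
  (ensure-same (+ 0.1 0.2) 0.3))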

I've also been reading a wonderful book by Neil Crofts entitled Authentic Business. I'm currently self employed and want to be doing something that makes the world a better place (still idealistic after all these years) that also lets me feed my family. I'm hoping that some of the ideas in Crofts' book will help inspire me. More on it later after dinner!


a funny thing happened on the way to copying my address book
Wednesday, April 12, 2006

I wanted to copy all of my Address Book data from my PowerBook to my wife's eMac (no fancy, schmancy synchronization for me!). No problem: I made a backup of the data from my Address Book and copied the (6.3-megabyte! what is in this thing?) file to the eMac. I then fired up Address Book on the eMac and selected "Revert to backup." The computer said "beep" but no new addresses appeared. I tried it a few more times on the theory that if once doesn't work, try again. No joy. I then dragged the backup file from Path Finder to the Address Book. I got the dialog about importing but I didn't get any addresses. Since the UI had failed me, I quit Address Book, opened the backup file (it was a bundle) and copied the contents to ~/Library/Application Support/Address Book. Then I restarted Address Book and, bada boom, bada bing, there were the addresses.

So why didn't the backup/restore thing work? What if I didn't know what I was doing (some of the time!)? Would the backup I made have restored on my machine, or is it machine specific? That seems pretty wacky to me and potentially pretty bad.


metacopy loses a dependency
Tuesday, April 11, 2006

I've removed metacopy's dependency on metatilities-base. Metatilities is sad, but kids have to grow up and leave the nest someday. I also added a test system and some simple tests.


Tinaa and ASDF-Systems
Monday, April 10, 2006

Tinaa can now take a passable shot at documenting ASDF systems. Heretofore, Tinaa placed the root part (i.e., the thing you asked to have documented) at the top level and placed everything else under it. Now, however, Tinaa treats all of the name holders (an ill-defined concept meaning a part that maintains the names of other parts; in practice, these are systems and packages) similarly and places each of them at the same level in the hierarchy. It then creates a single table of contents page that points to each of them. There are many other minor improvements in both styling and output to be found as well so please have a look at this sample of Tinaa applied to Tinaa and let me know what you think.

By the way, thanks go to Todd Mokros and Cyrus Harmon for some great Tinaa patches and bug reports.


Speaking of ASDF Status... color redux
Sunday, April 9, 2006

Back when I first put up ASDF-Status, I had some troubles with choosing colors that provided good contrast for color blind viewers. I spent a little time trying to sort things out but never felt happy with the results. Today, I came across Graybit - "an online accessibility testing tool designed to visually convert a full-color web page into a grayscale rendition". Cool.


ASDF Status desiderata
Friday, April 7, 2006

I've been slowly moving back towards ASDF-Install-tester and ASDF-Status. Part of the reason they've been slow to improve (aside from my being too busy by half) is that I'm dissatisfied with their putative architecture. I'd like this pair to be tools for the ASDF-Install portion of the Lisp community and there are too many kludges and half-assed solutions in the current implementations for that to happen. Here are some of the things I think would be good.

  • Better data management - save lots and lots of data every time AIT runs. Use this to track historical trends.
  • Don't keep checking a dead horse - don't test a system unless it (or one of the systems on which it depends) has changed. This might tie in with ASDF-Upgrade.
  • Better communication - provide RSS feeds of everything, selected sub-systems, selected Lisps, etc. Let system authors sign up to receive alerts if a test fails.
  • Share the load intelligently - AIT should provide an assembly-line mechanism whereby it can dole out tasks to computers that are donating CPUs for testing. Tests don't take that long to run so this is more about testing on many platforms than about CPU overload.
  • Let system authors ping AIT to start tests immediately (this is more a wishlist item since it ought to be possible for system authors to set up AIT on their own boxes but anything that adds a barrier to entry is a bad thing).
  • Do more than just install. Many systems have a test-op. Use it if it's there and report the results. Use the compiler output to help system authors find out what went wrong where.

I'm sure that there are more thoughts out there in the community (let me know). I think that there are even other ideas I've had that are already lost in the ever-deepening flow of synapses and memories from my skull.

The short sharp shock is that I'm intending to smush SQLite into ASDF-Status and integrate ASDF-Install-Tester with a Lisp-based web server (probably Araneida). This makes for a much bigger project (but also a much more interesting one). I'll keep the inquiring world informed!


First NASA, now NOAA
Friday, April 7, 2006

I say whoa. Science is about free expression not politics. Of course, everything is political.


My graphics rock
Friday, April 7, 2006

As in, I have the graphics skills of a rock. That said, here's a badge for Allegro Common Lisp. (Please) feel free to improve it (once you stop laughing -- !)


Grapher Server
Wednesday, April 5, 2006

Franz asked me to try my hand at tying together AllegroServe and GraphViz. I decided, perhaps foolishly, that this would be a good time to figure out how to use javascript and step onto that AJAX bandwagon. Thanks to Franz's AllegroServe and Common Lisp, that turned out to be a very enjoyable ride.

The result is Grapher Server: an AJAX web application that uses CL-Graph to provide two interactive examples of graphing fun. The first lets you experiment with several random graph generation algorithms and the second provides an interactive class browser.

The application is being served from an older G4 running on my home network and using the DynDNS service to resolve metabang.gotdns.com. Sometimes, the update daemon running on my computer gets confused and hands DynDNS the wrong IP; sometimes my kids shut the computer down <whoops, sorry dad>. All that is to say that this may not be the most reliable of servers but it will have to do until I switch to a hosting provider that can handle Lisp! I'm told that Franz hopes to get this up on their servers at some point <cool> and I'll certainly let everyone know if that happens.

I also intend to release the code I used and do some more explaining but I'm not sure when I'll get to that. In the meantime, enjoy.


Tinaa likes SBCL more
Wednesday, April 5, 2006

Thanks to Cyrus Harmon and some messing about with logical / physical pathnames, Tinaa is now much happier with SBCL.

Cyrus also made it clear that Tinaa suffers from (at least) two other problems:

  • A description of how exactly to use the thing is conspicuously missing from the documentation
  • There are a lot of dependencies on other libraries.

The first is easy to remedy: just do (document-system <system-kind> <system-name> <destination-root>) at a Listener. The system-kind can be 'package or 'asdf-system or whatever (though asdf-system isn't very well supported yet) and the name is the name of the system (!). I'll add that to the web site and readme at some point (soon, I hope).
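
For example (a hypothetical invocation; the package name and output directory are made up, and I'm assuming document-system is exported from the TINAA package):

;; Document the CL-CONTAINERS package, writing HTML under /tmp/docs/.
(tinaa:document-system 'package 'cl-containers "/tmp/docs/")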

Dependencies are a more difficult issue, especially when many of them are in flux. ASDF-Upgrade, ASDF-Install, etc are all helpful but they don't feel like they are quite enough.

No new ideas here today from me though: I've got other things to do.


Another good use of simulation...
Tuesday, April 4, 2006

Bird Flu! Sounds like a fun project.


Tinaa doesn't like SBCL much
Tuesday, April 4, 2006

I now have SBCL 0.9.11 running (yeah!) and hope to look into some Tinaa issues. Here are the problems that Tinaa has with SBCL:

  • Class documentation is missing slot lists
  • The contents link is missing from all part pages
  • The TINAA:CANNOT-MAKE-PART condition is missing its documentation
  • Links to style sheets often point to the wrong level of the hierarchy
  • (part-symbol-name part) sometimes returns lists (which makes Tinaa mad)

Once I finish these, I'm intending to go back to documenting ASDF-Systems more nicely.


Earthquake!!!
Tuesday, April 4, 2006

From American Scientist:

Computers Provide a New Look at a 100-Year-Old Disaster

A diverse team of geophysicists and mathematicians announced a computer simulation of the great earthquake that rocked San Francisco in 1906, an effort that they hope will inform precautions for future earthquakes in the area.

"The is the most comprehensive and up-to-date picture of how the ground shook nearly 100 years ago," said Mary Lou Zoback, coordinator of the U.S. Geological Survey's earthquake hazards team.

The new simulation took two years to create, using supercomputer time at Stanford, UC Berkeley, the Lawrence Livermore National Laboratory and URS Corp., a Pasadena engineering firm.

Research geophysicist Brad Aagaard said the speed of the 1906 quake was "phenomenal," traveling 300 miles along the San Andreas Fault at up to 13,000 mph. Much of the city was destroyed within 4 seconds. "I'd be under the nearest table the second I felt the first shudder," he said.

Among other sources, Aagaard said his team used data from scientists who began studying the 1906 quake just three days after it struck. A full video of the new simulation will be presented next month at a joint conference sponsored by the Earthquake Engineering Research Institute, the Seismological Society of America and the California Office of Emergency Services.

There is even a video.


Stop It!
Monday, April 3, 2006

For them's that care, there is also a new version of my Stop It! widget out. The biggest improvement is better graphics but there are also the required number of bug fixes and tweaks.


Tinaa update, etc
Saturday, April 1, 2006

I've updated all the various metabang packages I support. Most of the patches are minor bug fixes, etc. Tinaa, on the other hand, saw a few actual improvements, the most exciting of which is that it now runs on SBCL (0.9.10 or higher). Thanks go to Todd Mokros for helping to find some of the final necessary touches. That said, Tinaa still doesn't quite behave as it should on SBCL. I'm going to be looking into that as soon as I finish getting 0.9.11 running on my PowerBook.


Rate-It!
Saturday, April 1, 2006

Version 1.1 of Rate-It!, my OS X / iTunes music ratings widget is out. Prettier, less buggy, sucks less!


Time Present, Time Past
Friday, March 31, 2006

This memoir by Bill Bradley is a thoughtful and thought provoking piece of work. I listened to the book on tape (sadly, it was the abridged version) read by Bradley himself. His reading is workmanlike but what he is saying is anything but. Hearing him go over the problems we faced as a country in 1997 (all of which have only gotten worse) and carefully explaining the sort of honesty and integrity that would be needed to solve them is both heartening and horrific. Heartening because there are people who get it: who understand how things are interrelated and know that nothing is easy. Horrific because they are too few and because thoughts like these -- thoughts that require work, sacrifice, and change -- are hard to hear and harder still to heed. Highly recommended.


Yet another sign of my growing control of the entire world
Friday, March 31, 2006

<smile>

I'm not actually a megalomaniac. Really!


Ducks yesterday, frogs today
Friday, March 31, 2006

I live in Amherst MA and there is a small vernal pond behind my house. Every year, there comes a day when what was silent bursts with sounds and life: the frogs have woken and want to play and have babies. It's wonderful.


Javascript / DOM / browser silliness
Thursday, March 30, 2006

I've been working on a small project for Franz using AJAX and other happy buzzwords. My goal is to place the application on one of my home computers (metabang.gotdns.com -- a site that isn't particularly complete or maintained but one I can use as a Lisp server...). The application serves up HTML pages with images and imagemaps. When you click on various parts of the image, some AJAX happens to replace the current image and map. It's pretty nice and everything worked fine in my browser of choice (OmniWeb -- love those tabs and workspaces!) but Firefox and Safari didn't quite click. In Safari, every other image served would work. In Firefox, only the first image served was happy. Here's the Javascript code that updates the imagemap:

if (placeholder) {
    var mapString = getXMLDatum(root, "map");
    if (mapString && mapString.length > 0) {
        placeholder.innerHTML = mapString;
    } else {
        placeholder.innerHTML = "";
    }
}

Not much to change there but, on a whim, I pulled the "set to empty string" code out of the else:

if (placeholder) {
    var mapString = getXMLDatum(root, "map");
    placeholder.innerHTML = "";
    if (mapString && mapString.length > 0) {
        placeholder.innerHTML = mapString;
    }
}

Believe it or not, this change was enough to convince Safari to work on every image map (and if anyone knows why, I'd like to hear about it). Firefox, however, could not be mollified so easily. I used the built-in DOM inspector tool and it turned out that the map area information was in fact being modified. So, I reasoned, it must be that the image wasn't noticing... The code to update the image was the simple:

if (imageEntity && baseName) {
    imageEntity.src = "./temporary/" + baseName +
        ".jpg?time=" + now.getTime();
}

I changed this to:

if (imageEntity && baseName) {
    imageEntity.setAttribute("usemap", "");
    imageEntity.src = "./temporary/" + baseName +
        ".jpg?time=" + now.getTime();
    imageEntity.setAttribute("usemap", "#G");
}

This did the trick (which, I assume, is a rather odd reference to the game of bridge?). If you ask me, neither of these changes should have been necessary (and weren't, after all, for OmniWeb which, if I recall correctly, uses the same WebKit rendering engine as Safari so... end the sentence and go figure.)

The moral of the story is the usual one: "software sucks".


He's not talking about Lisp, but still
Wednesday, March 29, 2006

Jonathan Rentzsch talking about WebObjects on its tenth birthday:

You need to weigh the pleasure of knowing the Better Way versus the pain of Not Being Able To Use It.


Processing
Wednesday, March 29, 2006

Processing looks interesting. Has anyone explored it? I've downloaded the demo but haven't had a chance to do anything with it yet.


Nice UI touch
Wednesday, March 29, 2006

Who says text can't have a UI (*)? I was looking a bit more closely at Brian Mastenbrook's colorize package and noticed this little gem:

(when (> (- (get-universal-time) *last-warn-time*) 10)
  (format *trace-output* 
          "Warning: could not find hyperspec map file. ~
           Adjust the path at the top of clhs-lookup.lisp ~
           to get links to the HyperSpec.~%")
  (setf *last-warn-time* (get-universal-time)))

Only giving the warning once every 10-seconds is a nice touch.

(*) Frankly, I've never heard anyone say that but it sounds provocative, so what the heck!


Bind bound
Saturday, March 25, 2006

A month or two ago I changed bind's name from metabang.bind to metabang-bind (because I happen to like logical pathnames (*) and logical pathnames don't happen to like #\.s in a component... and SBCL is a stickler for that kind of thing). In the process, I failed to change some of the references and I didn't think to just use a symbolic link... so some people have been getting out of date code.

Thankfully, someone was kind enough to tell me that bind wasn't working so that I was able to figure out the error of my ways.

(*) At least in the unrealized ideal...


DOM Scripting
Friday, March 24, 2006

Jeremy Keith's book on DOM Scripting (whose title eludes me...) is a great read for people who want to better understand how to put the three-legged stool of content (HTML), style (CSS) and action (Javascript) to good use in the modern web. It has good examples, not too much fluff and a clear writing style that makes it easy to get through. My only two complaints are the overabundance of screen shots that don't add much of anything to the text and the tendency to repeat bits of code snippets and advice. I have a feeling that both of these are considered de rigueur in the modern age of instant learning (I've often wondered whether a series themed "X for Smart People" would sell).

I hadn't done any real web work since about 8-years ago (ouch, I am getting old) so this was a wonderful reintroduction to the possibilities. I recommend it.


Announcing: metacopy
Thursday, March 23, 2006

Metacopy is a Common Lisp deep/shallow copy tool. It lets you specify how to treat the slots of a given class (shallowly set them or deeply copy them) using the defcopy-methods macro. You can then make copies of objects using the copy-thing method. Documentation, Darcs repository, tarballs, CLiki page and ASDF-Installability are where you'd expect them to be. (Thanks to common-lisp.net for continuing to be a great place to use.)
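
In other words, something like this (a hedged sketch: the person class is invented and I haven't double-checked the exact defcopy-methods option syntax, so check the documentation before copying this):

(defclass person ()
  ((name :initarg :name :accessor name)
   (friends :initarg :friends :accessor friends :initform nil)))

;; Ask metacopy to copy every slot deeply (option syntax may differ).
(metacopy:defcopy-methods person :copy-all t)

;; copy-thing then returns a fresh person with freshly copied slots.
(let* ((bob (make-instance 'person :name "Bob"))
       (clone (metacopy:copy-thing bob)))
  (name clone))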

Metacopy uses ASDF-System-Connections to attach itself happily to CL-Variates, CL-Graph and CL-Containers. This means that if you load one of these systems and then load metacopy (or the other way around), then the random number generators, graphs and containers will all be copyable without further ado.


Mercator: A Scalable, Extensible Web Crawler
Wednesday, March 22, 2006

Even though 1999 was a long time ago, this paper on building a web crawler seems like a nice introduction to the problem. The authors limn the various challenges in building any crawler and the additional ones that come from building one that can handle the ever-growing World Wide Web. They also describe many of the extensions that they needed to add to Java in order to support the very large data structures required. There is even mention of Bloom filters.

All in all, a nice ride for the train or bus and one that leaves me wondering "why not do this in Lisp?" Would it scale as well? Would it be easier to build? Maintain? Extend? I'd like to think so...


More mlist[backspace]p madness
Tuesday, March 21, 2006

I think I've actually finished converting my code to use mlisp (though I thought that before). Most of the conversion was painless but there were a few all-uppercase keywords scattered around and :keyword doesn't eq :KEYWORD (by default in mlisp). I also don't quite understand the best way to handle find-symbol in a case-independent fashion. If I just call find-symbol on a string, it will only find the symbol in mlisp if I lowercase it and it will only find it in other Lisps if I uppercase it... I could wrap things in some readtable-case stuff but that seems hackish to me.
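
The best I've come up with so far is a brute-force wrapper (a sketch of a workaround, not a recommendation):

;; Try the name as given, then upcased, then downcased, so the same
;; call works in both "modern" (lowercase) and standard (uppercase)
;; Lisps.
(defun find-symbol* (name &optional (package *package*))
  (or (find-symbol name package)
      (find-symbol (string-upcase name) package)
      (find-symbol (string-downcase name) package)))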


a personal grumble
Tuesday, March 21, 2006

A week ago, I released Lock-It!, a simple widget that locks an OS X computer by switching to the login window (it just calls the CGSession application). Today, I find that someone else released the equivalent widget. That's OK. It's more than likely a coincidence and certainly wasn't all that novel an idea. The thing that irks me is that Apple decided to make their widget the "featured" widget. How does that work?! (Their widget also looks nicer and they have a prettier website... grrrrr).


plist weirdness (from my perspective)
Tuesday, March 21, 2006

I use a variable in my widget's property files to turn debugging on and off (where debugging means writing to the console or not). This morning I spent a few minutes writing a script so that I could turn debugging on and off from the command line without having to open Apple's property list editor. The script (which uses Apple's defaults command) works just fine but it doesn't alter the behavior of the widgets. To summarize:

  • If I modify the plist file with Apple's property editor application, then the widget's behavior changes
  • If I modify the plist file with defaults (from the command line), then the widget behavior doesn't change (though the date/time of the plist file does change).

My guess is that the Dashboard environment isn't reading the changed property unless I modify it with the property list editor... but I'm not sure how to verify this or get it to do my bidding (the goal of all programmers... absolute power <smile>).


another ASDF-Patch (20 March 2006)
Monday, March 20, 2006

I'm slowly catching up on the ASDF Patches I've been sent. Today, the patch consists of the following:

  • Updated download-files-to-package to handle changes in Allegro's reader macro behavior (thanks to Robert Goldman)
  • Fixed a typo (ckeck -> check) (thanks to Robert Goldman)
  • ASDF-install now saves trusted UIDs in a file (thanks to Robert Goldman)
  • ASDF-install now only loads the packages you ask for instead of every package that gets downloaded.

The last bullet point is a change in ASDF-Install's behavior. It fixes a bug where ASDF-Install tries to load every single system file that happens to be downloaded regardless of whether or not you asked for those systems to be installed.

Here is the diff.


ASDF-Install patch (17 March 2006)
Friday, March 17, 2006

Though I've been the official ASDF-Install maintainer for the last several months, I've done precious little maintaining. Thank Goodness that Edi Weitz handed me the project in such good shape!

Today, I'm going to be committing my first patch. It allows ASDF-Install to work properly with Allegro's "modern" Lisp. Here is the diff in case you're curious.

There have been several other patches submitted and a few more home grown ones that need to get into the trunk. That should happen very soon.


Keeping me honest
Friday, March 17, 2006

Christophe Rhodes sent me an e-mail to correctly chide me for several misstatements in my recent "modern" Lisp post. Lisp has always been case sensitive; it just usually transforms case to upper so that it doesn't feel that way. I knew this but I was being lazy in my writing. He also takes umbrage at the term "modern" and, to be honest, I've never liked it either. Having a Lisp that talks more easily to other languages is a good thing but that doesn't make it necessarily better or more new. Thanks for keeping me honest, Christophe!


Confused Quick review: Paparazzi!
Thursday, March 16, 2006

Paparazzi! lets you create images of web pages. I guess that that's cool but I don't think I really get it. Why would I want to create an image of a web page? Why not print to PDF? Why is this a popular application? What do people find so useful? What am I missing?


Good error message: Kudos to Franz
Thursday, March 16, 2006

All the Lisps I know of warn you if you reference an undeclared variable. Allegro goes one better and looks in other packages:

Warning: Free reference to undeclared variable *random-generator* assumed special. This symbol may have been intended: cl-variates:*random-generator*.

All we need is a button that lets me add the package reference or import the symbol or whatever and Bob would be our uncle (which is a very weird but enjoyable saying!).


Being modern
Thursday, March 16, 2006

I've been working on a very small project for Franz (more on this next week when I have time to write). Today, I've been buffing my code so that it compiles and runs happily in Modern Common Lisp (i.e., the case-sensitive kind). I'm actually against case sensitivity in programming languages and file systems (I want case to be recognized and remembered but not used in lookups or in sorting) but think that Franz's argument that case sensitivity helps interface to other languages is a pretty hard one with which to argue.

In any case, the code conversion has been mostly pain-free. Two of my habits will have to change:

  • I like to use all uppercase strings in my defpackages and
  • I like to use all uppercase symbols in my #+ and #- reader macros

and I've had to muck with some old code (mostly in CL-Mathstats) that used mixed and erratic case in some of the variable names (i.e., some variables were named in upper case and then referred to in lower case <ugh>). There are a few other things but nothing too serious.
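
The defpackage habit, at least, has an easy case-portable replacement (a small example; the package and symbol names are made up):

;; Uninterned symbols read correctly whatever the readtable case, so
;; they work in both standard and "modern" Lisps.
(defpackage #:my-utils
  (:use #:common-lisp)
  (:export #:frob))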


That language isn't a parrot
Thursday, March 16, 2006

Lisp is making a comeback! It's sexy, books about it are award winning and even though Zach Beane uses the word dead in close connection with it (hence the parrot reference <smile>), Lisp is a language that met its maker a long time ago, spun metacircular and keeps on coming! Congratulations to Peter Seibel and the Lisp community!


Expression engine...
Wednesday, March 15, 2006

I heard about ExpressionEngine via one of the ITC podcasts from BlogHer 2005. Today, I took a look. My only comment so far is that they made it harder than necessary to download and evaluate their product. I don't mind having to register (much) and I understand that they (the nameless "they") have an interest in knowing who is interested but ... why make me:

  • navigate to the download link only to be told I have to register
  • get me to register so that I get an e-mail which I can use to activate my account (I have an account! I just wanted to download something; why do I need an account?!)
  • return to the site, navigate back to the download link and be told that I must first log in in order to download
  • log in and be returned to the main page so that I
  • have to navigate back to the download page again.

That's too much. Now I have a bad taste about the site and their product and it just wasn't necessary. Place the download links in the e-mail; log me in when I activate my account, whatever but please, don't waste my time.


The paradox of choice
Tuesday, March 14, 2006

Harper's index:

A Dutch study found that 50 percent of the products returned to stores for malfunctions actually work fine but are just too complicated to use.

Barry Schwartz at PC Forum:

"People like a lot of stuff in their stuff. But after using products with mulitple features, preferences switch to simpler models," he said. The problem is that people don't seem to know this about themselves, "They want capability but get satisfaction out of usability—at the moment there is a tradeoff between the two."


One of the most important lessons in debugging anything
Monday, March 13, 2006

(At least, it is in my opinion). From Will Shipley:

What's the next step? Check your assumptions. Never say, "Well, I did blank, I know I did blank, blank is done, and that's the blank story!" Look at the actual code that does blank, and make DAMN SURE it really blanks.


Color
Monday, March 13, 2006

I just finished looking at an "introduction to color" presentation by James C. King from Adobe. The one I have is from 1998 but the concepts haven't changed much in the last 8-years (gosh, time flies). The presentation focuses on how human eyes see color and on the invention of the CIE color scales. It was pretty interesting.

One other thing to note is that he relies heavily on annotations and OS X's Preview sucks with annotations: there is no way (that I found) to look at them without clicking (and resizing!) each one.


Allegro non-crashing crash reports
Friday, March 10, 2006

I've been using Allegro 8.0 on OS X 10.4.5 with EMACS and SLIME and having a great time. I've also been using the console to print debugging messages while working on widgets. It was with surprise that I started noticing messages about "alisp" crashing since, as far as I could tell, Allegro wasn't doing anything of the sort. I wonder if this is like the SBCL non-crashing crashes that John Wiseman noticed a little bit ago. Here is what a crash report looks like.

Update: Franz says that this is a known OS X problem (see this tech note for details). They hope to have a work around for Allegro at some point and are going to be adding a new FAQ about it. Hooray for good technical support!


Giving a tinker's cuss for the struggling artist
Wednesday, March 8, 2006

I've been spending some of my time recently working on wriggling widgets. Javascript has its quirks (so does Lisp!) but it's not too bad. The real problem for me is that my drawing skills are pretty darn primitive. It's fun seeing what some of the more skilled widget crafters have put together but I can only envy them, not emulate. The trouble is that I've quite a few widget ideas that go above and beyond the basic box that I can manage (OmniGraffle is my friend!). If anyone with artistic skills is interested in collaborating with me (or knows of someone I could contact), please drop me a line.


Beating an adjustable horse
Wednesday, March 8, 2006

I hear from various sources that CLISP and LispWorks both behave as Allegro does (here and here) with respect to delete and express adjustability. I confess to not having thought about it (*) but I can't quite see where the efficiency comes from in having vectors potentially lose their (express) adjustability during a delete. I would have thought that the (naive?) method of swapping the items to be deleted with the items at the end and then shrinking the size would be the fastest general method. And this method, it would seem, wouldn't alter any other vector properties. I guess this is just another question to add to my list.

(*) Saying "I haven't thought about it" is an intellectual's cover when he (or she) is worried that they are about to step in it!


I can adjust
Wednesday, March 8, 2006

Zach Beane helped to clarify my surprises with delete a bit via e-mail (thanks). One issue was my fuzzy recollection of the differences and connections among simple arrays, fill pointers and adjustability. Another is the issue of being expressly adjustable versus actually adjustable. The Hyperspec says this about simple-array:

The type of an array that is not displaced to another array, has no fill pointer, and is not expressly adjustable is a subtype of type simple-array. The concept of a simple array exists to allow the implementation to use a specialized representation and to allow the user to declare that certain values will always be simple arrays.

When you delete from a vector, the array you get back may not be expressly adjustable anymore -- you can still call adjust-array on it but this may cause a copy to occur in order to do the adjustment. (Which, in my opinion, is sort of like claiming that everyone can fly as long as you don't care how hard they hit the ground when they land...).
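
You can watch this happen at the REPL (a quick check whose results will vary by implementation):

;; Whether the result of DELETE remains (expressly) adjustable is
;; implementation-dependent; the second value may be NIL in some Lisps.
(let ((v (make-array 3 :initial-contents '(1 2 3)
                     :adjustable t :fill-pointer t)))
  (values (adjustable-array-p v)
          (adjustable-array-p (delete 2 v))))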

My issue was (and is) that I was using delete multiple times and didn't expect it to alter the actual adjustability of my vectors. This seems like a rough edge in the standard that should be improved in the Common Lisp 2007 standard.

(In case anyone is wondering why this just cropped up for me: it's because I'm a long-time Macintosh Common Lisp user and all of MCL's arrays are adjustable. SBCL has non-expressly-adjustable arrays but the vector returned when you delete from an expressly adjustable array remains adjustable. So it wasn't until I started running code in Allegro that I ran into this corner. Live and learn (and complain [smile]).)


not a del.icio.us followup
Wednesday, March 8, 2006

I received a bunch of great responses to my del.icio.us query. Thanks! Unfortunately, I seem to be in a state of constant frenzy and haven't had a chance to think them through. Stay tuned for exciting (and possibly wrong-headed) conclusions!


Probably not what they meant department
Tuesday, March 7, 2006

From American Scientist's weekly science in the news e-mail.

The robin is the favorite food source of the Culex pipiens mosquito, which carries West Nile.

If this is true, I would be really worried about those mosquitos.


Unpleasant delete surprise...
Monday, March 6, 2006

I was tracking down a problem in CL-Graph's random graph generation code that only seemed to be happening in Allegro. Here's the story.

It was a dark and stormy night. Somewhere, a dog howled. The Common Lisp container library includes several classes that stand in for their Common Lisp counterparts (it also does a good job of providing methods so that regular Lisp data structures behave properly with the container library's methods but that's a different story). One of these is the vector-container. CL-Graph uses vector-containers to hold an array of edges attached to each vertex. When an edge is deleted, something very like the following gets executed:

(setf vector-edges (make-array 1 :initial-contents '(1)
                               :adjustable t :fill-pointer t))
(setf vector-edges (delete 1 vector-edges :count 1))

The trick is that under Allegro, the vector returned by delete may no longer be adjustable. At first, I thought that this had to be an error. However, the Hyperspec says that

... If sequence is a vector, the result [of delete] might or might not be simple, ...

which, if I'm reading it correctly, means that Allegro is within the letter of the law. On the other hand, this also seems like a dumb thing. What if I want to delete something else? What if I want to adjust the array later? Why is Lisp like this? (Probably for efficiency and because this is what some vendor had already done before the standard writing got underway...)

More to the point, what do I do now? Do I make two calls to subseq (ugh?); do I swap the element to be deleted with the last element and then shrink the array (not hard but it irritates me that I have to think about this...)? Is there some other technique I'm overlooking? (I hope so, even if it does make me look silly!)
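
For concreteness, here is what the swap-and-shrink option looks like (a sketch that assumes the vector has a fill pointer and that element order doesn't matter):

;; Remove the first occurrence of ITEM by swapping it with the last
;; element and decrementing the fill pointer; the vector itself is
;; never replaced, so its adjustability can't change.
(defun delete-one (item vector &key (test #'eql))
  (let ((pos (position item vector :test test)))
    (when pos
      (setf (aref vector pos)
            (aref vector (1- (fill-pointer vector))))
      (decf (fill-pointer vector))))
  vector)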


How do you del.icio.us?
Sunday, March 5, 2006

I've been using del.icio.us for a while now, have about 500 bookmarks and a lot of tags. I generally use Cocoalicious rather than visiting the del.icio.us site because it's faster and prettier. Perhaps because of this, I rarely use any of the social parts of del.icio.us. It's just a big bag of my tags and bookmarks that I use for research or to store stuff to read for the putative rainy day when I might have time to read... I'm curious, do other people use the social aspects (other people's tags, related tags, etc)? How do they use them? How do you use them?

My weblog is commentless (because, well, just because) but e-mailing me is only a click away! Thanks.


Quartz Composer
Saturday, March 4, 2006

I just finished reading a tech note on Apple's Quartz Composer environment and played around with it for a few minutes. I'm not technically competent (graphically speaking) to fully evaluate it, but from a beginner's perspective, it is a sweet piece of work. The tool is a visual IDE, which is still pretty unusual. As the tech note says:

The first thing you'll notice about Quartz Composer is that it isn't like most development tools. Instead of writing pages worth of code to directly manipulate the various graphics APIs on the system, you work visually with processing units called patches. These patches are connected into a composition. As you work with a composition, adding patches and connecting them, you can visually see the results in a viewer window. Each and every change you make is immediately reflected in the viewer—no compilation required. This results in a development experience like no other.

(Actually, it results in a development environment like Lisp but I won't blame Apple for not giving our great language some advertising. )

The environment is in the same drag-and-drop-and-change-parameters-in-dialog-boxes style that Interface Builder uses. Unlike IB, however, QC makes it easier to see your connections and provides layering (sub-compositions). I wish that you had the option to seamlessly switch between the visual representation and a text-based one (code, data, code is data, data is code...). Borland's Delphi had something like that eons ago and it was really wonderful to be able to use either mode interchangeably. It seems as if the building blocks of QC would make a nice little Domain Specific Language and that having such a language and being able to easily use it to create new building blocks would be a win-win-win.

If you have a Macintosh, I think you owe it to yourself to spend a few minutes checking this tool out. I think it brings a range of expression to the rest of us similar to that which Desktop Publishing and Spreadsheets brought so many years ago.


Getting Real
Friday, March 3, 2006

37Signals has written a book. It's on the web (as PDF, no dead trees).

Getting Real details the business, design, programming, and marketing principles of 37signals. The book is packed with keep-it-simple insights, contrarian points of view, and unconventional approaches to software design. This is not a technical book or a design tutorial, it's a book of ideas.

There are even several free chapters! Sounds like XP + heavy prototyping + great people. 37Signals makes nice stuff and they appear to have fun doing it. That sounds good to me.


We care about silly book titles
Friday, March 3, 2006

Such as People Who Don't Know They're Dead: How They Attach Themselves to Unsuspecting Bystanders and What to Do About It

And this is why it's completely indisputable that our way of life is just plain better...

As Dave Barry says, I'm not making this stuff up.


Funny because it's so true!
Wednesday, March 1, 2006

iPod versus MS iPod!


Seashore (not seaside...)
Tuesday, February 28, 2006

We may be running out of names...

I just came across Seashore, a Cocoa-based GIMP. I've never used the GIMP myself, partly because it required messing with X windows on the Mac (not that that is hard but it is one more thing). Seashore may change that.


swamped...
Monday, February 27, 2006

therefore... few posts. But you could guess that already, couldn't you?


Harry Potter and the Half Blood Prince
Monday, February 27, 2006

Though it's not great literature, J. K. Rowling has produced another enjoyable page turner in Harry Potter and the Half Blood Prince. That said, the book doesn't do much to resolve anything in Harry's odd world. The bad folk are up to increasing amounts of no good, the ones in charge are increasingly inept and Harry and his friends are increasingly beset by the tumultuous tides of youthful hormones. It's all fun stuff and if you like Harry Potter you'll like this too. (I know I sound as if I'm damning with faint praise; I suppose that's because I've always thought the Harry Potter books to be overly hyped. They are good fun, but they are not great. As Kafka said, "Do we need books that make us happy? No! We need books like ice picks to break the frozen seas within us." Of course, Kafka was a bit batty so maybe we should just ignore him and go back to our Tivos and iPods.)


Another 15-minutes
Thursday, February 23, 2006

My Stop It! widget is now up on Apple's website. At least I can use it to time my 15-minutes of fame <smile>.


That's so weak
Tuesday, February 21, 2006

Bruno Haible has a nice write up on weak references.


Broken windows and leaky abstractions
Tuesday, February 21, 2006

Daniel Jalkut delves into the details of Apple's UI implementation and uncovers some interesting leaky abstractions in the process. Part of the problem he finds stems from there being multiple ways to access functionality and multiple levels that don't necessarily stand upon one another in nice layers. If I had the energy, I'd try to express this in terms of building useful DSLs that compose nicely (a very hard problem in general). Unfortunately, I don't so I won't.


Stop-It!
Tuesday, February 21, 2006

Not too long ago, I had the need to time myself so I went looking for a decent OS X timer application. There are a lot of them out there but none of them seemed simple enough or clean enough to make me happy. Since I was already interested in honing my DOM Scripting skills, I thought I'd try to hack a widget of my own. It didn't take long to make something that I could use but, fool that I am, I wanted to make it decent enough to offer to the general OS X using public. That took quite a bit longer than I wanted -- the old 80/20 applies as usual -- but I am pretty happy with the result.

Therefore, and without further ado, I introduce Stop-It!, an OS X countdown timer for the rest of us.

When you're looking to time something, you don't want to fiddle around and you don't want a countdown timer that gets in the way. Stop It!'s clean, unobtrusive interface lets you start counting down with the fewest possible keystrokes and clicks.

If you're interested, you can download it or learn more from its webpage.


And I thought I had style...
Tuesday, February 21, 2006

SBCL does a nice job of noticing all sorts of little (and not so little) gotchas in your code:

; compilation unit finished
;   caught 234 STYLE-WARNING conditions
;   printed 52 notes

Time to start (!) mucking out the stalls.


Free the sounds
Friday, February 17, 2006

The Free Sound project is cool.


ILC @ Vancouver?
Monday, February 13, 2006

Bill Clementson proposes Vancouver as a great venue for a lisp / scheme conference. I'm all for it!


Rethinking Public Key Infrastructures and Digital Certificates: Building in Privacy
Saturday, February 11, 2006

Rethinking Public Key Infrastructures is one of those books with a first chapter that lures you in with the false sense of security that you will understand this book and then whacks you on the head in chapter two with dozens of definitions and theorems. I have to confess that I didn't make it past chapter two (though I really wanted to) because there is some complex and beautiful stuff happening here and I just don't have the time right now to assimilate it.

The key point of Brands' work is that PKI and RSA are nice but not sufficient. At issue are these three truths:

  • persistence - any data we put out there will stay out there
  • loss of control - any data out there can be used for anything
  • linkability - any data out there can be linked

PKI/RSA gives us identity management but does not separate authentication from identification. Brands' frameworks are built on top of PKI/RSA but allow for certifications that act like cash, subway tokens or stamps. Rather than putting trust in Certificate Authorities, it is the certificate holders that decide what to reveal and to whom to reveal it.

This is important work and a very important topic. If you're interested, you should probably subscribe to Bruce Schneier's web log as he does a great job following academic, industry, and government happenings.


Ah, libraries
Friday, February 10, 2006

I was just reading about yet another person's struggles to find existing code that would solve their problem:

We started by looking at what kind of X libraries already existed that might prove useful for our own goals. This research brought up many different existing X-related projects. Some of the more useful ones we discovered were:

  • Long List...

At this point we decided it would better suit our goals to implement our own X solution. For the A and B libraries, we had to distribute multiple files from different authors, which didn't fulfill our simplicity objective. And each library had extra functionality that we didn't need, and especially with library C, that functionality seemed to unnecessarily impact performance. All of the libraries that we looked at seemed to have performance problems.

Sounds familiar, doesn't it?

The catch is that the language is Java and the topic is graph manipulation and display. Yes, Lisp has "library" problems, but once you step off the well trod paths, every language does.

Maybe Lisp has more problems because people can do so much in it and because people have been focusing on really hard problems, so some of the easy stuff (like sockets) has gotten short shrift. This isn't to excuse anything; it's just to point out that all of this computer stuff is harder than it first appears and library development is very hard.


metabang software update
Thursday, February 9, 2006

Crazy times in the big city (ok, I live in a little town but that just doesn't sound catchy and you would have probably already stopped reading!).

  • I originally wrote bind in MCL and it was willing to let things like (destructuring-bind (a nil) (foo) ...) go blithely by. OpenMCL no longer likes that construct so I've had to be a little more creative. The point of constructs like this is to avoid having to add (declare (ignore foo)). It's handy (see the sketch after this list).
  • A friendly patch to CL-Graph from Levente Meszaros adds more flexibility to its graph->dot facilities by including several new sub-classes and the infrastructure to make it count. I'll write up an example real soon now.
  • Tinaa has seen several good improvements including: much nicer style sheets, a permuted symbol index, lots of bug fixes, the start of ASDF package documentation and more. Oh, it now also emits a list of the things that don't yet have documentation.
  • CL-Graph (as mentioned) is better documented
  • I have several patches to ASDF-Install that are in the testing / waiting for me to make time for them stage. Soon, I promise, soon.
  • Everything else has been moving slowly along (as have I <smile>).
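
For the curious, here is the standard-CL circumlocution that the nil placeholder saves you from (a minimal sketch using a literal list rather than bind itself):

(destructuring-bind (a unused) (list 1 2)
  (declare (ignore unused)) ; the boilerplate the nil placeholder avoids
  (print a))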

Keep those cards and letters coming!


Ignoring errors
Wednesday, February 8, 2006

Sometimes it's nice to be able to ignore-errors but it can be a tricky business when you forget that that is what you're doing! I just spent the last hour tracking down what appeared to be a bizarre bug. A function I was calling kept aborting early and returning nil. To compound the problem, I was doing some error handling within the function and its callers so I kept spelunking around in the wrong place. Finally, I managed to isolate the problem by splitting the errant function into littler and littler pieces until it had to succumb.

The real problem: I had an around method that was wrapping an ignore-errors around the call-next-method. I'm sure it had been a good idea once upon a time (though it's more likely that it was just expedient). Sigh.
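
For the curious, here is a minimal sketch of the trap (the generic function and its methods are made up for illustration):

(defgeneric process (thing))

(defmethod process ((thing string))
  (error "something went wrong with ~s" thing))

(defmethod process :around ((thing string))
  ;; looks innocent, but silently turns every error into nil
  (ignore-errors (call-next-method)))

(process "data") ; => nil, and the error never reaches the debugger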


Egg timers
Tuesday, February 7, 2006

I've been checking out some different simple timer programs under OS X. Here are some notes:

  • Yolk Dashboard widget: cute and semi-functional but way too many clicks are required to set a timer up. Let's see: one keystroke to invoke Dashboard, one mouse acquire and click (on a very small target) to get to settings, another mouse acquire and click on a text entry box (nothing is focused by default), some numbers to type, another click to close the settings and another click to start the timer! Ack. Worse yet, if you want to time something for 74 minutes, you must enter 1 in the hour field and 14 in the minutes field -- Yolk won't do the math for you.
  • Minuteur - This is a crazily full featured timer that has everything from a pretty GUI to a full screen mode. The down side is that it starts the Finder up whenever it runs; since I'm a Path Finder junkie, this is a bad thing.
  • Deja Time Out - This isn't a timer program but it is pretty neat if you want something to remind you to relax your shoulders, stretch your back and look around to smell the roses. I haven't decided yet whether breaking my work flow like this is a Good Thing or not but if I decide yes, this is the program I'll use.
  • Pester - A very simple but competent alarm program. It's probably the one closest to what I was looking for when I started, though now that I've found it I think I'd like a bit more of a GUI. Sheesh, some people are never satisfied.

Nice spam
Tuesday, February 7, 2006

I like this

Security Advisory: When you log in to your PayPal account, be sure to open up a new web browser (e.g. Internet Explorer or Netscape) and type in the PayPal URL (https://www.paypal.com/us/) to make sure you are on a secure PayPal page.

Of course the link didn't really point to PayPal.


humor (of a sort)
Monday, February 6, 2006

I'm a losey speler. I can generally tell when a word is spelled incorrectly but that doesn't help me figure out what correction is required.

In any case, I was writing a comment about some of my code's inadequacies and "inadequacies" didn't look right. I checked in my Dashboard dictionary but it wasn't found -- go figure -- so I googled it.

I was right about the spelling but the funny thing is the number one search result... I guess it's not a surprise. Sigh. As Nietzsche said, we're human, all too human.


Check that photoshop before you shop those photos
Monday, February 6, 2006

TMC has a decent article on science journals trying to stay on top of fraud by checking all the photos they get in Photoshop.

The same advances that have given consumers inexpensive digital cameras -- and software to easily copy, crop, or alter an image with a few clicks -- have also proven a temptation for unscrupulous researchers. Federal science fraud investigations involving questionable images have grown from 2.5 percent of the cases in 1989-90 to 40.4 percent in 2003-04, according to the federal Office of Research Integrity, which investigates scientific misconduct.


Physical-pathname-directory-separator
Monday, February 6, 2006

I couldn't immediately think of an existing way to determine the physical pathname directory separator in Lisp so I wrote this:

(defun physical-pathname-directory-separator ()
  (let* ((directory-1 "foo")
         (directory-2 "bar")
         (pn (namestring
              (translate-logical-pathname
               (make-pathname
                :host nil
                :directory `(:absolute ,directory-1 ,directory-2)
                :name nil
                :type nil))))
         (foo-pos (search directory-1 pn :test #'char-equal))
         (bar-pos (search directory-2 pn :test #'char-equal)))
    (subseq pn (+ foo-pos (length directory-1)) bar-pos)))

This works on my Mac under Digitool's MCL, SBCL, CLISP and Allegro. I still don't have a Windows box to test on but maybe some happy reader can contribute! Alternatively, some unhappy reader can tell me the 100 ways in which the code above is wrong <smile>.
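
For what it's worth, a call on a Unix-style file system (like OS X) looks like this:

(physical-pathname-directory-separator)
;; => "/"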

(BTW, I assume in the code that the delimiter could be a string with length greater than one. It's probably always going to be a single character, which would make my life slightly easier... Does anyone know of a file system that uses multi-character delimiters between directories? How about something that would disallow such a scheme?)

Update: Peter Seibel reminds me that I should probably be using #'char-equal instead of #'string-equal in my calls to search. Also, I have reports of success under SBCL and Windows 2000. I'll take that as a success.


LispVan Allegro Cache Presentation
Monday, February 6, 2006

I finally got around to watching Bill Clementson's LispVan presentation on AllegroCache. It's quite fun but I have to wonder what the reasoning behind the following choices was:

  • The use of defclass* to define persistent classes (why not something like defpersistent-class?). The * form for defclass is already in semi-common use as a defclass that makes definitions less verbose by abbreviating all that :initform, :initarg, :accessor, :reader, etc. stuff (see the sketch after this list).
  • The macro to iterate over objects in a persistent store is doclass. Why not doobjects? I guess that's a picky thing but when I see doclass I think I'm going to be iterating over, well, classes.
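
To illustrate that convention (this is the semi-common shorthand I mean, not AllegroCache's actual macro):

(defclass* point ()
  (x y))

;; is roughly equivalent to writing out

(defclass point ()
  ((x :initarg :x :accessor x)
   (y :initarg :y :accessor y)))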

Kvetching aside, AllegroCache looks to be a sweet and powerful product. I hope that Franz really does work on making it accessible to more Lispers by including it in their non-professional versions and by working to make it run outside of Allegro.


Design irony
Monday, February 6, 2006

I came across DSpace today while looking at CL-Semantic. DSpace includes this spiffy diagram made by a company called Dynamic Diagrams. They have a newsletter. Overall, Dynamic Diagrams seems like a cool company with some good ideas but I personally find their newsletter format very traditional and both non-dynamic and non-diagrammatic. It just looks like a bunch of words to read and I already have more than enough of that.


Interesting Hack for MCL on Dual Processor Macs
Monday, February 6, 2006

My friend Joshua related the following hack to get Digitool's MCL to use both processors on a dual processor machine. Running two different images under the same user account doesn't seem to be enough. Both MCLs share a single processor and each gets about 45% of that CPU. To convince MCL to use both processors, you can:

  • Open an MCL image in user A's account and start an image
  • Hot swap to user B's account
  • Open another MCL image and start up another image.
  • Hot swap to user C's account
  • Log out of user C.
  • Now A and B's processes are running in the background.

After this incantation, user A's MCL stays around 100% (95% - 106%) and user B's MCL maxes out around 55% - but that's with top running so I'd say it probably maxes out around 65% or so.

Oh the things we do for power.


One, two, many
Sunday, February 5, 2006

I'm pretty tired tonight, so I can only hope I'm coherent!

One of my personal design-issue favorites is the switch between things that are zero/one and things that can be two and things that are many. I've been pushing Tinaa a tiny bit lately because I want to document all the software I'm trying to corral. Tinaa started out with a very simple model:

  • the thing you want to document is a part
  • parts have sub-parts of different kinds (some of which get documented separately)
  • each part with documentation gets a page to itself (plus various summary pages)

One restriction was that a given part only had one set of sub-parts of any given kind. For example, a class has method sub-parts and slot sub-parts. The first iteration of Tinaa also displayed class sub-classes but the restriction meant that I could not display both sub-classes and super-classes (or direct and non-direct methods or direct and non-direct slots and so on). Changing this wasn't all that hard but, because it was a switch from one to many, it took more work than I thought it should have -- more redesign than I had hoped.

The next Tinaa restriction to go will be the one that links parts up with a single page describing them. This needs to go because it restricts how I want to describe packages; currently, you can ask Tinaa to build you documentation for either the internal symbols or the external symbols but what I think you really want (most of the time) is to document everything but include two tables of contents: an external one and an everything one.

This is another one-to-many change, so we already know that there will be a new level of indirection (we move from a pointer to a thing to a pointer to a list that has pointers to things). It's an interesting switch because it occurs often (and often remains surprising, though perhaps it shouldn't) and yet we don't just start with these extra layers of indirection because we often don't need them... Writing this out is making me wonder if there is room for some simple language constructs here... That, however, will need to wait until tomorrow.
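
In code, the switch looks tiny even though its ripples are not. A hedged sketch (the slot names are illustrative, not Tinaa's actual ones):

;; before: exactly one page per part
(defclass part ()
  ((page :initform nil :accessor part-page)))

;; after: a list of pages per part -- the new level of indirection
(defclass part ()
  ((pages :initform nil :accessor part-pages)))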


We're all better than we think
Friday, February 3, 2006

Brian Mastenbrook points out that most code isn't nearly as bad or as unmaintainable as the original author thinks.

They feel the code is not ready for public distribution because it's not good enough. No code is, once it's subjected to real users and their requirements. The adaptability of the codebase is what matters.

and also mentions that good testing and interactive debugging facilities make maintenance much more feasible -- the break loop is such a win!

I try to leave my ego at the door (which isn't easy because I sometimes have the self esteem of a gnat and my ego seems like the only thing keeping me upright!) because it's easy to make mistakes (and if you're not making them, you're probably not doing anything very interesting or challenging for yourself).

I think it's important to remind ourselves that we're more than what we do, to breathe, to laugh and to enjoy life and Lisp.

(I'm also going to congratulate myself in advance for remembering to flick the switch that stops the brackets from getting quoted before I post this and notice -- or worse, have someone else notice -- that I screwed up... again!)


CL-Graph has much more documentation
Thursday, February 2, 2006

One small step for Tinaa and one small step for CL-Graph. Tinaa also has a purtier CSS style sheet and several small improvements.


A (small) shout out to the SBCL developers
Monday, January 30, 2006

The SBCL downloads page just got a little bit better: it shows the version you'll be getting when you click. That's a little thing but it makes it easier to see what you're doing and I'm all for it! (Though I think shrinking the font a tad would be a Good-Thing).


On having a cold
Saturday, January 28, 2006

I have a typical New England winter cold... and have spent the day snuffling and petting my cats. Aside from the sniffles, sneezing, and lack of physical energy, however, I feel pretty well and have taken the time to try and catch up on my organization, etc.

The big achievement of the day is that I've moved my various websites to an (almost) table-less design. This is supposed to be a good thing so I guess I feel good about it. The only thing I'm confused about is that I have two DIVs, one of which is supposed to take up 14% of the width and the other is supposed to take up 80%. When the page gets a bit narrow, however, the layout engine (in Safari, Firefox and Omniweb (my favorite -- just look at those cool tabs)) is placing the DIVs so that they are on top of one another rather than always next to each other... I'm not sure what the scoop is there but it's table-less so it's good <smile>.

I've also started to look at some of the recent patches I've been sent for ASDF-Install. I'm still testing things but you can find out how to check out the latest unstable version via Darcs if you're interested.


The ever illogical logical pathnames
Friday, January 27, 2006

Back when I was first putting some of the metabang software on-line, I ran into problems with SBCL complaining about #\.s in my path names. I figured it was my bad since I generally use MCL and MCL is pretty lax about things like this... So I converted some of my periods to dashes and ran off.

Yesterday, however, I was looking at Peter Seibel's Markup code and SBCL again complained about the periods in "com.gigamonkeys.markup". I was surprised that others hadn't noticed this but then Peter quickly pointed out that the problem was due to the pathnames having become logical somewhere along the way, and asked why that was (book authors seem to know the standards; it's very irritating!). Of course, it was my bad again because I was pushing logical pathnames onto my asdf:*central-registry*. Once I converted to physical pathnames, happiness reigned.

Is this, however, a good thing?

It's certainly surprising and probably unnecessary. Like them or loathe them, logical pathnames are part of Lisp and some percentage of folks are going to use them -- I personally think that they are kind of cute and cuddly. It would be easy to have ASDF ensure that any logical pathnames it was handed got transformed into physical ones. This would, it seems, paper over the problem and I don't think it has any down side... (of course, I'm currently considering renaming unCLog to "my bad" so it's likely that my mileage may vary...).
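
In the meantime, if you want to keep using logical pathnames without tripping over this, the conversion is a one-liner (assuming, hypothetically, a logical host named "HOME" is already defined):

(push (translate-logical-pathname "HOME:lisp;systems;")
      asdf:*central-registry*)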

Thoughts (aside from when I'm going to get a real weblog that has comments and such) are always welcome.


Not announcing: CL-Markdown
Tuesday, January 24, 2006

I'm not announcing CL-Markdown because there's no there there. I did, however, get a few questions about it after I mentioned it earlier so here's some why and wherefore.

There are already many markup languages (here and here come to mind) and lots of HTML generators. Markdown is John Gruber's text-to-HTML conversion tool for web writers.

The overriding design goal for Markdown's formatting syntax is to make it as readable as possible. The idea is that a Markdown-formatted document should be publishable as-is, as plain text, without looking like it's been marked up with tags or formatting instructions. While Markdown's syntax has been influenced by several existing text-to-HTML filters, the single biggest source of inspiration for Markdown's syntax is the format of plain text email.

I like what Gruber has done in terms of making a "language" primarily for writing and reading that can also be converted to HTML. It's also nice that it's a "standard" (of sorts) and that there are tools that let you go from HTML back to it. I've been meaning to write a CL-Markdown for quite a while now and since I currently have many other more pressing things to do, I thought that now was a good time to start hacking at it -- ok, so that wasn't completely sane. Whatever.

I started looking at both the Perl and Python source and then decided I didn't understand Perl or Python well enough to translate easily and that the problem didn't seem that hard... (famous last words, right!). So what I do now is read in lines of input and chunk them into what is (roughly) the correct block structure. Then I go back over the blocks and handle the spans. My current output is a "document"; a document is a list of chunks; a chunk is a list of lines and a line is a list of strings and conses representing markup. Chunks have types and levels (e.g., bullet, level 1 or blockquote, level 3). I'm currently rethinking this approach some because putting the linear structure back into its tree form is being more of a pain than I expected... I also want to use Peter Seibel's HTML and PDF backends...
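
As a rough picture of that intermediate representation (illustrative data only; the real chunks carry more state than this):

;; a document: a list of chunks; each chunk has a type, a level and
;; some lines; a line mixes plain strings with markup conses
((:bullet 1 (("plain text " (:strong . "a bold span"))))
 (:blockquote 3 (("quoted text"))))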

I'm probably going to have to let discretion be the better part of valour and drop the project for a week or two in order to finish some real work. Still, I'm pretty happy with how quickly it seems to be coming together...


Too busy
Tuesday, January 24, 2006

I've been busy with my putative day job and have been spending spare moments looking at test frameworks and mucking out a CL-Markdown. More work on ASDF-Install-Tester will appear eventually.


Lisp is sin...
Tuesday, January 24, 2006

I only skimmed lisp is sin when it first appeared. There are some interesting threads in the responses to it on Lambda the Ultimate. They include some notes on:

  • original versus maintenance code
  • languages for smart people versus languages for all people
  • the use and abuse of macros

It's big and it's prime
Tuesday, January 24, 2006

What's 9,152,052 decimal digits (big link) long and prime all over? The 43rd Mersenne prime.


Lots of minor metabang software changes...
Friday, January 20, 2006

I've updated lots of bits and pieces of my metabang.com open software. I'm not as organized as Pascal Costanza (who also just released lots of very nice sounding changes to the Closer to MOP project) so I can't easily summarize everything I've done. Here are a few highlights; if you're curious, you can find details on each project's change log page.

  • ASDF-Binary-locations: Improved documentation

  • CL-Containers: Integrating cl-variates and cl-containers using ASDF-system-connections; lots of minor package symbol issues. Also added file-iterators to the system and improved hash-table compatibility.

    Fixed incorrect calls to add-parameter->dynamic-class (need to get my test suite running again!)

    Changed first-item and last-item to first-element and last-element (but kept first-item and last-item around for now). Also made first-item and last-item setfable.

  • CL-Graph: various random graph algorithms

  • CL-Variates: used ASDF-System-Connections to play better with CL-Graph (random graphs) and CL-Containers (sampling)

  • defsystem-compatibility: added hack for GBBOpen's mini-module system (not loaded automatically though)

  • LIFT: Added *lift-equality-test* to make ensure-same more flexible; improved printing control with :print-follow

    Started some work comparing different tools (look in the compare directory)

  • Metatilities: added (samep string string) method that uses string-equal

    the :export-p option wasn't doing anything in defcondition; now it is

    Added ccl: in several places b/c we can no longer rely on having used the CCL package.

    Package symbol magic (?!) to support cl-variates, cl-containers and cl-graph living happily together.

  • Moptilities: Added ignore-errors/ to remove-methods and remove-methods-if

    Added dry-run? to remove-methods and remove-methods-if

    Minor webpage fixes

    Added initial test system; test directory and tests

  • Tinaa: mostly reorganization; also switched to using my own copy-file routine because I wanted more flexibility and better agreement with with-open-file's keyword arguments (one less thing to remember).


The Frappr Lisp is growing
Friday, January 20, 2006

There are now 133 members (as of 20 Jan 2006). My humble state has four (though I'm currently the only one on the western end). I keep hoping to see Lispers that are close by.


Lisp echo chamber
Thursday, January 19, 2006

I hate to take part in the Lisp echo chamber, but Bill Clementson has an excellent point:

But ... the use of Lisp allowed them to develop products that pushed them to the front of the pack. They were subsequently bought out; however, the fact that their Lisp tools were subsequently discarded does not throw a negative shadow on Lisp. Lisp got them to where they needed to be to succeed - definitely a quality that entrepreneurs want in a programming language!

Now I've gotta go and get me some of that entrepreneur stuff everyone keeps talking about!


Path Finder 4 at last!
Wednesday, January 18, 2006

It's not spatial but it is very special. Path Finder 4 is released at last. It's much faster, has lots of interesting new features and is well worth exploring.


Interesting logo
Tuesday, January 17, 2006

I liked Brian Mastenbrook's URL of brian.mastenbrook.net so I looked at gary.king.net. It's taken and the logo is a Kiwi (bird) with a crown. Go figure. Maybe I should grab gary.king.org while there is still time!


Switching hosts...
Tuesday, January 17, 2006

I'm in the process of moving from Westnic to A2 hosting. Westnic has met most of my needs (which, admittedly, are slight!) but doesn't have ssh. This has made updating my site and my weblogs a bit more byzantine than necessary. A2 has ssh! Hello rsync!! In theory, there should be no disruptions. In practice? I don't know.


ASDF and test systems or How I spent my Sunday afternoon
Saturday, January 14, 2006

(update: you should probably see this updated note)

I've been noodling around setting up tests for my ASDF systems. I'm using LIFT because that's my unit testing framework. Here is an example system definition:

(defsystem moptilities-test
  :components ((:module "test"
                        :components ((:file "tests"))))
  :in-order-to ((test-op (load-op moptilities-test)))
  :depends-on (moptilities lift))

The only unusual part of the definition is the :in-order-to. It's a normal ASDF clause that can be read as "in order to perform test-op, first perform load-op on moptilities-test." ASDF already knows things like "in order to load, first compile" so that's why this clause isn't used all that often.

This definition seems OK but where do we run the tests? I've seen some systems write a custom perform method, as in:

(defmethod perform ((operation test-op)
                    (c (eql (find-system 'moptilities-test))))
  (describe
   (funcall (intern "RUN-TESTS" "LIFT") 
            :suite
            (intern "TEST-MOPTILITIES" "TEST-MOPTILITIES"))))

This does the trick but has to use that ugly funcall/intern thing. Worse yet, the perform method is only called the first time that one runs a test-op. An alternative to the funcall/intern bit is to put the call to run-tests in the tests.lisp file but this doesn't fix the one-time nature of perform. Generally speaking, we want ASDF to only do things once... otherwise, we'd spend half of our lives recompiling code that hasn't changed. Testing, however, is a horse of a different color.

If we don't want to touch ASDF source code, we could add the following to our system definition file:

(defmethod asdf::traverse :around 
           ((operation test-op)
            (c (eql (find-system 'moptilities-test))))
  (let ((result (call-next-method))
        (perform-op (cons operation c)))
    (unless (find perform-op result :test #'equal)
      (setf result (append result (list perform-op))))
    (values result)))

Traverse is called by operate in order to figure out what needs to be done. Our :around method tells traverse to return what it usually would but also ensures that there is a call to perform the test-op on the system. We guard the append with the unless to make sure that we don't run the tests twice. Now that I had a solution, I looked a bit to find one that wasn't such a hammer -- besides, it's bad form to mess with unexported methods!

At first, I tried messing with the times that ASDF records for when operations are performed. I thought that telling ASDF that test-ops were performed at time zero would suffice. This, however, was a dead end because ASDF also needs to be told that the operation hasn't been done using the operation-done-p method. Thus, a simpler method for getting what I want is:

;; just my system
(defmethod operation-done-p 
           ((o test-op)
            (c (eql (find-system 'moptilities-test))))
  (values nil))
;; all test systems
(defmethod operation-done-p ((o test-op) (c system))
  (values nil))

This can either be just on my test-system (the first form) or on all test-systems (the second). My guess is that the latter is a good idea but there are probably other ways of getting tests set up so it is probably better to keep things local. My final system file (minus comments, package definitions and such) looks like:

(defsystem moptilities-test
  :components ((:module "test"
                        :components ((:file "tests"))))
  :in-order-to ((test-op (load-op moptilities-test)))
  :perform (test-op :after (op c)
                    (describe
                     (funcall 
                      (intern "RUN-TESTS" "LIFT") 
                      :suite (intern 
                              "TEST-MOPTILITIES"
                              "TEST-MOPTILITIES"))))
  :depends-on (moptilities lift))

(defmethod operation-done-p 
           ((o test-op)
            (c (eql (find-system 'moptilities-test))))
  (values nil))
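
With those definitions in place, a test run from the REPL is a one-liner:

(asdf:operate 'asdf:test-op 'moptilities-test)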

and that's a pretty happy ending.


clisp append problem work around
Saturday, January 14, 2006

I heard from the CLISP mailing list that you can use :buffered nil to work around the append problem I mentioned a few days ago. I'm sure that there will be a complete fix soon but this is good enough for me and ASDF-Install-Tester.
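
Since :buffered is a CLISP extension to open, applying the suggestion to my test case looks something like this sketch:

(with-open-file (s "talk.tome"
                   :direction :output
                   :if-exists :append
                   :if-does-not-exist :create
                   :buffered nil) ; CLISP-specific keyword
  (format s "~%Hello"))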


Cool pictures of a hot volcano
Saturday, January 14, 2006

Augustine is erupting.


Problems appending in clisp
Wednesday, January 11, 2006

Before my recent hardware conniptions, I was trying to get ASDF-Install-Tester to run in clisp on OS X (or perhaps on clisp in OS X... under OS X?!). Unfortunately, I ran into some pretty bizarre behavior trying to append to existing files. Here is a test case on the off chance that someone reading this is also running the Darwin Ports version of clisp 2.37 under OS X.

When I run the following code in my version of clisp (complete version details here)

(in-package common-lisp-user)

(let ((working (user-homedir-pathname)))
  (with-open-file (s (make-pathname
                      :name "talk"
                      :type "tome"
                      :defaults working)
                     :if-exists :append
                     :if-does-not-exist :create
                     :direction :output)
    (format s "~%Hello"))

  (with-open-file (s (make-pathname
                      :name "talk"
                      :type "tome"
                      :defaults working)
                     :if-exists :append
                     :if-does-not-exist :create
                     :direction :output)
    (format s "~%Goodbye")))

The file created looks like this (using od -c)

[billy-pilgrim:~] gwking% od -c talk.tome
0000000   \0  \0  \0  \0  \0  \0  \n   G   o   o   d   b   y   e
0000016

And that's just not right. My best guess is that someone figured removing all the ones from the first part of the file would make seeking to the end much faster (kind of like removing all the hurdles from a race course...). <smile>


Maintaining ASDF-Install
Wednesday, January 11, 2006

I just added some CLiki pages to track ASDF-Install bugs and enhancement requests. If you've got an idea or a gripe, let your fingers flow...


Hardware...
Wednesday, January 11, 2006

My aging Apple Powerbook took the news of the new Intel MacBook badly and decided to make me spend the day mucking with fsck, Firewire drive mode, DiskWarrior and other fun stuff.

What irritates me about this sort of thing is that many -- if not most -- of the lessons learned become useless almost as quickly as they are learned. It's a little like getting a new roof on your house. You spend a lot of money and you have a new roof?! Yes, you need the roof but, at the end of the day, there just isn't anything exciting about it.

Now, back to work.


On being stupid in public
Tuesday, January 10, 2006

As several people have already pointed out, I was being silly and letting the surface similarity between constantly's implicit definition and the lambda form confuse me. In the expression with constantly, the *a* is evaluated so that the function constantly can be called. Constantly, in other words, never sees *a*; it just sees 1.

Man, yesterday Bill Clementson talks me up (thanks by the way); today, I stick me foot in my mouth. It didn't even taste good. Oh well. Them's the blogging breaks.


Something to worry about (but not much)
Tuesday, January 10, 2006

(Update, don't read this. I make a minor fool of myself and now I feel embarrassed).

What does:

(defvar *a*) ; *a* must be declared special for the rebinding below to matter

(let* ((*a* 1)
       (f (lambda (&rest args)
              (declare (ignore args)) *a*)))
  (print (funcall f))
  (let ((*a* 3))
    (print (funcall f)))
  (values))

print? If you guessed "1, then 3" then you were correct. How about:

(let* ((*a* 1)
       (f (constantly *a*)))
  (print (funcall f))
  (let ((*a* 3))
    (print (funcall f)))
  (values))

If you guessed "1, then 3" then you disagree with every Lisp I've asked. I've tried this with and without optimizations and assume that the compilers are being clever with constantly, even though CLTL2 says that constantly can be defined as:

(defun constantly (object)
   #'(lambda (&rest arguments) object))

I don't know the standard backwards and forwards well enough to say that one answer is better than another. Does anyone else know chapter and verse?


Now that's fast: Hyperdrive, here we come!
Tuesday, January 10, 2006

From Science in the News

All Hail Hyperdrive: New-Old Idea Attracts Publicity, Top-Secret Attention

Obscure German physicist Burkhard Heim theorized ways to reconcile quantum mechanics with Einstein's general theory of relativity. In the early 1950s, Heim started to rework Einstein's equations [and] wound up with a theory of six dimensions, where gravity and electromagnetism are linked, allowing the conversion of energy from a gravitational to an electromagnetic state and back again.

... [One consequence is] the idea of the hyperdrive engine ... It could zip a spacecraft and humans aboard it out to Mars in just three hours. The craft could presumably eat up in 80 days the 11 light-years that exist between Earth and a distant star. That certainly sounds like science fiction, so stay tuned. Even testing is a long way off...

Wow!


Change Logs
Monday, January 9, 2006

I've modified my website builder (based on LML2) to include Darcs change logs. Here is the one from CL-Containers. It's not as nice as having actual version numbers on my ASDF packages, but it's a start.


I feel almost famous
Monday, January 9, 2006

Bill Clementson speaks kindly about ASDF-Install-Tester and ASDF-Status. I'm honored. On the down side <smile>, I now feel compelled to finish automating the dependency graphs!


I hope orange is still the new black
Monday, January 9, 2006

... because I just redid all of my common-lisp.net web pages and orange figures prominently.

Not to worry, though, because I'm grokking the style thing and can change it all easily the next time around.

I'm pretty happy with CSS (not that I heard anyone asking for me opinion <smile>); those web standards folks did a pretty nice job and most of the browsers seem to be getting it too!

Next time (which I hope will be a while from now; web-tweaking takes too damn much time) I'll try to figure out tableless design.


At last - Services that are a service
Monday, January 9, 2006

I think that Apple's integrated Services menu is a great idea. Unfortunately, it's marred by a horrible implementation choice: the user isn't in charge of which services appear. My services menu is about a page long and is full of those little disclosure triangles that can be so annoying to navigate. I have dozens of choices on the menu that I know I'll never use (where did the Convert Chinese Text service even come from!?). Can I remove them? Well, yes. If I want to go and edit dozens of property files hidden in packages all over my hard drive. Frankly, it's never been worth the bother and so it became one of those daily annoyances I thought I'd left behind in the Windows world.

I'd been intending to write an application that would fix this: scan the hard drive for services, let you edit which ones appeared in the menu, modify the property lists and save backups so that you could easily restore things. Lack of both time and Cocoa skills made this one of those ever receding propositions so it was with great joy that I found that Peter Maurer had created Service Scrubber. Peter writes great software for OS X. I use Witch and Textpander every day.

Thanks Peter.


ASDF-Install-Tester becomes AIT
Friday, January 6, 2006

ASDF-Install-Tester takes too long to type so its official name on Common-Lisp.net is AIT. AIT now has mailing lists (developer and announce) and I've added some pages to the CLiki to help it self organize, record bugs and wishes and explain how it's supposed to work.

ASDF-Status has also seen progress. There are now results from SBCL on X86 Linux (thanks to Humberto Ortiz Zuazaga) although I've lost Pascal Bourguignon's clisp results -- two steps forward, one step back! Perhaps more interestingly, I've broken down the results by author to make it easier to figure out what's happening.


Education again
Thursday, January 5, 2006

Dana Blankenhorn (now that sounds like a hard name with which to survive middle school!) opines on the question of education:

But it reminded me that both open source and closed source disciplines have a common problem. That problem is education, recruiting new blood into programming, and training it up so it becomes useful.

CS used to be popular because you could make lots of money on the internets. That bottom-up push is gone and there doesn't seem to be much top-down encouragement in elementary, middle or even high schools.

Places like the Python Bibliotecha may help but I think more leadership is needed.


Not so dangerous
Thursday, January 5, 2006

Dave Pollard correctly blasts this year's Edge answers as being more boring and naive than dangerous:

I was stunned by the blandness of the responses and the utter disconnectedness of respondents from the critical issues of our world today. From the social scientists, who are overwhelmingly from the so-called 'cognitive sciences', we get navel-gazing speculations on consciousness that are neither dangerous nor useful. From the technologists we get technophilia, muddle-headed blather about technology as religion and as the saver of the universe, dangerous only in its naivety. From the real scientists we get shopworn retreads about the compatibility or incompatibility of science and religion. From philosophers we get starry-eyed dreaming about a new political order, a world where people suddenly stop behaving the way they do and start behaving responsibly. What planet do these people live on?

and he follows up with 10 ideas he finds dangerous. Good stuff.


Making pathnames, part two
Monday, January 2, 2006

I received several pathname creation strategies in the mail, all using merge-pathnames. Thanks to Andreas Fuchs, Zach Beane and Peter Seibel for their insights. The basic idiom is:

(defparameter *output-directory*
  (merge-pathnames
   (make-pathname
    :directory '(:relative "output")) *working-directory*)) 

Or, if CL-FAD is available:

(merge-pathnames (pathname-as-directory "output") *working-directory*)

This is nicer than my solution because it makes both Lisp and the programmer do less work. Three cheers for the Lisp community.


Making pathnames
Monday, January 2, 2006

I usually make path names by doing things like this:

(defparameter *output-directory* 
  (make-pathname :directory `(,@(pathname-directory *working-directory*)
                              "output")
                 :defaults *working-directory*))

Though it always feels more verbose than it ought to be. Hmmm, what I'd like to say is something more like: (make-subdirectory-pathname *working-directory* "output").
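
If I were writing that hypothetical helper myself, it would be a thin wrapper around the same idiom:

(defun make-subdirectory-pathname (parent name)
  ;; append NAME as one more directory level under PARENT
  (make-pathname :directory (append (pathname-directory parent)
                                    (list name))
                 :name nil
                 :type nil
                 :defaults parent))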

It's probably safe to assume that all file systems have a folder hierarchy? I suppose that might be false on embedded systems though. Thoughts?


Dojo guiding principles
Monday, January 2, 2006

I found these while reading about the JavaScript environment Dojo. They seem like damn good ideas to me. The first two are easy for me, the third much harder...

Reduce barriers to adoption.

Simply put, do not give users reasons not to choose your code. This affects everything from design to licensing to packaging.

Simple first, fast later

Make it simple to use first, make it fast when it's appropriate. Simple here means simple for users, not for us. ...

Bend to the constraints of your environment

Do not bludgeon a problem to death with code. If the environment can do most of something, let it. Fill in as necessary, but do not re-invent. Make the path smooth for users, but do not introduce your own idioms where they aren't required.

I think that the third is harder because of the usual "not invented here" syndrome that seems to afflict many Lispers.


My new years projects
Sunday, January 1, 2006

Practically speaking, making them public may produce more pressure for performance:

  • As far as I know the CLiki is unmaintained. I'd like to take over the job.
  • Improve and automate ASDF-Install-Tester / ASDF-Status
  • Improve my personal infrastructure for development and testing
  • Get Round up (or something similar) running on Common-Lisp.net
  • Improve documentation and support for CL-Containers, Cl-Graph, and all the rest of that happy family.

That should keep me busy for a few months!


Interesting weblog on systems thinking and global warfare
Sunday, January 1, 2006

Global Guerrillas.


Why you have to try it
Sunday, January 1, 2006

Computer Scientist Eugene Wallingford links to an article from Michigan State University about quantum physics and the value of experiments. I'm often astonished at how wrong my initial designs turn out to be as I start to sketch them in code (and I'm pretty sure I'm not alone). Humans are generally terrible at understanding the consequences of their actions and intentions. That's why bottom-up, interactive, test-driven, coders-are-designers, Lisp-like programming is the way to go whenever you're not sure exactly what you're doing and is usually still the way to go when you do.

BTW, Happy New Year.


The Curious Incident of the Dog in the Night-Time
Saturday, December 31, 2005

Like the Speed of Dark, the Curious Incident of the Dog in the Night-Time is told by an autistic narrator. The dog-book, however, takes place today, in our world. Its protagonist, the fifteen-year-old Christopher Boone, is much more limited in his understanding of others but no less sympathetic and no less courageous.

We all navigate a world of things seen and unseen at whose import we can but guess. We all tend to think we know what is going on most of the time (as the old adage says, we attribute our failures to others and our successes to ourselves) and I'd guess we're all wrong more often than we know. Reading this book reminds us of how much we have learned to ignore, of how much goes on that makes no sense, of how human, all too human, we all are.

The book is funny, sad and deeply moving.


Woken Furies
Friday, December 30, 2005

In Woken Furies, Richard Morgan adds another nail to his dystopic Takeshi Kovacs universe. Though it ends with a ray of hope -- and perhaps a bit too much deus ex machina -- this is by far the darkest novel yet. Kovacs spends much of it far from his envoy calm on a raging vendetta each step of which seems to only pull him further from equilibrium. Like Morgan's social conscience, the science fiction and writing remain excellent. Recommended.


Good news for OpenMCL on Intel
Friday, December 30, 2005

Bryan O'Conner has good news for OpenMCL users worried about the up-and-coming Intel-igization of OS X. Thanks Bryan.


Somewhat pointless fun: ASDF-Install dependencies
Tuesday, December 27, 2005

I realized sometime yesterday that ASDF-Install-Tester was collecting all of the information I needed to build an ASDF system dependency graph. Here is a portion of it:

The whole thing includes only the systems that I was actually able to download but it still provides a good representative sample of the ASDF-Installable package universe. FWIW, here is the code I used to make the dot file that I gave to GraphViz.

(in-package asdf-status)

(defparameter *mcl-systems*
  (remove-if
   #'atom
   (mapcar 
    (lambda (f)
      (with-open-file (in f)
        (handler-case
          (aprog1 
            (read in nil nil)
            (when (atom it)
              (warn "Parse error: ~A" f)))
          (error () (warn "Read error: ~A" f)))))
    (directory (make-pathname
                :name :wild
                :type "ait"
                :directory `(,@(pathname-directory *input-directory*)
                             "openmcl"
                             :wild-inferiors)
                :defaults *input-directory*)))))

(defparameter *dependent-systems*
  (remove-if 
   (lambda (system-info)
     (null (getf system-info :depends-on))) *mcl-systems*))

First, we read in all the system information that ASDF-Install-Tester saved in little AIT files. I just grab the OpenMCL files and filter out anything that doesn't look correct (because of ASDF-Install-Tester bugs, I suspect) and anything that has no dependencies.

(defmethod coerce-system-name ((name string))
  (string-downcase name))

(defmethod coerce-system-name ((name symbol))
  (coerce-system-name (symbol-name name)))
                              
(defparameter *systems-graph*
  (let ((g (cl-graph:make-graph 
            'cl-graph:graph-container :vertex-test #'equal)))
    (iterate-elements *dependent-systems*
     (lambda (system-info)
       (let ((system (coerce-system-name 
                      (getf system-info :install))))
         (cl-graph:add-vertex g system)
         (iterate-elements
          (getf system-info :depends-on)
          (lambda (other)
            (cl-graph:add-edge-between-vertexes
             g (coerce-system-name other) system
             :edge-type :directed))))))
    g))

#+Export
(cl-graph:graph->dot *systems-graph* 
  "user-home:docs;foo.dot"
 :vertex-labeler
 (lambda (v s) (princ (element v) s))
 :vertex-formatter
 (lambda (v s) 
   (format s "URL=\"http://www.cliki.net/~(~A~)\""
           (element v))))

Then we can build the graph and write it out in dot format.


CLisp (or is that clisp)
Monday, December 26, 2005

ASDF-Install-Tester / ASDF-Status update:

  • There are now results for CLisp (thanks to Pascal Bourguignon).
  • Improved SBCL results thanks to Christophe Rhodes pointing out that I was getting ASDF-Install errors for things that come included with the distribution.
  • More packages included because I've added dependencies on cl-html-parse and trivial-http and used them to download the latest list from the CLiki.
  • The colors I'm using are even more horrid. I need to get a better scheme and make sure that things are OK for the color blind.

It's still a clunky monkey but one small step for a giraffe and so on.


Education
Sunday, December 25, 2005

The story is about the lack of women in American Computer Science programs.

The US economy is expected to add 1.5 million computer- and information-related jobs by 2012, while this country will have only half that many qualified graduates, according to one analysis of federal data. Meanwhile, the subject is becoming increasingly intertwined with fields ranging from homeland security to linguistics to biology and medicine.

As an American, I find that pretty scary. Maybe if we can get everyone to use Lisp, Scheme, ML, etc., then productivity will rise enough that we'll need fewer programmers and IT people. Maybe not.

I wonder what the situation is like in Europe.


Network effects multiply
Saturday, December 24, 2005

Dylan Evans wonders if we might be coming to a big crunch. Given over-exploitation, over-population, general stupidity among governments, powers and people, and the end of cheap oil... I wonder how it can be avoided.

We now return to our regular Holiday bonhomie.


On Plug-ins and Extensible Architectures
Saturday, December 24, 2005

As I've been looking at ASDF-Install and Lisp libraries, I've been wondering how to get it all to hang together without everyone hanging separately (thanks to Ben Franklin for that one!). Dorian Birsan provides an overview of the Eclipse plug-in architecture, which is new and improved:

In the new pure plug-in architectures, everything is a plug-in. The role of the hosting application is reduced to a runtime engine for running plug-ins, with no inherent end-user functionality. Without a directive hosting application, what is left is a universe of federated plug-ins, all playing by the rules of engagement defined by the framework and/or by the plug-ins themselves.

He does a nice job limning the pros (flexibility, customization, framework-based) and cons (security, installation nightmares, version inconsistencies) of plug-in systems and concludes that Eclipse has done many things correctly and that much work remains to be done. Decent reading for the winter.

Not knowing anything about it, I wonder how Fink, Debian, Linux and all that manage things... Next stop: google.


Another day, another few improvements
Saturday, December 24, 2005

ASDF-Install-tester now works with Allegro and, drum roll please, SBCL. The Allegro 'problem' had nothing to do with Allegro and everything to do with the difference between rm -r and rm -rf. SBCL, on the other hand, took some doing. ASDF-Install-Tester didn't like running with the version of ASDF-Install bundled with SBCL and SBCL didn't like the cross platform version of ASDF-Install. A few (very minor) patches fixed the latter but then I ran into other troubles with trying to share the same version of ASDF-Install amongst my three Lisps (FASL incompatibilities). ASDF-Binary-Locations fixed some of these issues but there is still room for improvement.

On other fronts, I've improved the generated HTML of the status pages somewhat and corrected the missing error output (thanks to Christophe Rhodes for noticing that one!) but not the encoding problem John Wiseman noticed -- soon.

The next step is, I think, to write better documentation and ask for help! I don't have a Windows box or a Linux one and I'd like to include these in the chart. If anyone wants to volunteer some time, please let me know!


Frappr makes me sad
Friday, December 23, 2005

I just joined Frappr so that I could add myself to the Lisp map. When I first logged in, however, Frappr told me that:

You have no friends

I was so heart-broken that I had to write this blog entry instead.


ASDF-Install-tester under Allegro 7.0 (OS X)
Thursday, December 22, 2005

Check out the ASDF-Status page to see (partial) results for Allegro 7.0 on OS X. As people over in CL-Gardeners have pointed out, different Lisps behave, well, differently. I'm not sure why the results for Allegro are partial. The test for cl-package-aliases under Allegro runs fine but then Allegro doesn't appear to quit.

Today's mini-project is thus: timeouts (and maybe SBCL).


Word, Microsoft, Clueless
Wednesday, December 21, 2005

I just lost whatever respect I had for Bill Buxton:

Because Microsoft is such a large company, our perception is dominated by what we see in the core products like Office and Word, and we forget that much of the Macintosh experience is based on those products. Make sure we remember that.

Anyone who thinks Word on the Macintosh is an example of good user interface or somehow elevates the Macintosh to higher levels (as opposed to people's blood pressure) needs to get out more. Look at Pages, look at Nisus Express. Look at Keynote compared to PowerPoint. Microsoft Word is the most horribly designed product I've ever used.


Pretty web site (?) - ASDF-Status
Wednesday, December 21, 2005

I've borrowed some code from Peter Seibel and used LML2 to write a bunch of data munging / HTML generating code in the service of producing a slightly extended version of John Wiseman's ASDF Install status:

Aside from showing off my less than stellar HTML and CSS chops, this site provides a picture of where I'd like to see ASDF-Install-Tester go. What I'd really like to do is integrate some HTTP posting into the tester so that each test can be accumulated (compare with XBench). This would make it really easy to see the status of each project evolve over time.

I should mention SBCL's platform support page and Bill Clementson's weblog as sources of inspiration and CSS magical incantations!

(update: thanks to the thousand eyes of #lisp for catching my all too common typos!).


ASDF-Install-tester progress?
Tuesday, December 20, 2005

ASDF-Install-tester is now nominally cross-implementation. The trouble is that SBCL still uses its own brand of ASDF-Install and my code doesn't like it. I tried using the ported version of ASDF-Install but SBCL doesn't like that. Things do seem to work under Allegro (version 7.0) but I need to tweak a bit before things are completely happy. Finally, the whole thing has a tendency to hang for reasons I haven't been able to fathom. I know how to deal with timeouts in OpenMCL but need to learn how to invoke them in SBCL, Allegro, etc.

Actually trying to deal with all this cross platform stuff gives me a greater sympathy for people that complain about Lisp. Doing things like deleting directories or handling timeouts shouldn't -- it seems to me -- require me to start downloading libraries, etc.


A Brief Political Digression
Tuesday, December 20, 2005

If the Lispmeister can do it, then so can I. America is supposed to be a government of Laws. Let's keep it that way.


ASDF-Binary-Locations
Tuesday, December 20, 2005

I liked the code so much, I stole it.

I've taken Björn Lindberg's code and code from SLIME, put them together and added a little bit of love to create yet another ASDF Extension: ASDF-Binary-Locations (ABL for short). It makes it even easier to put your binaries where you want them.


Beginner's mind
Tuesday, December 20, 2005

L'affaire Reddit points out the importance of maintaining beginner's mind. As we learn a new environment, we also learn what to avoid: what steps not to take, what things not to do, what events not to expect. Over time, this learning fades into the background and we no longer notice the rough edges because we no longer encounter them. For us, it's as if they are not there.

The great thing about Reddit and about the new energy bubbling up around Lisp (or should I say sprouting?) is all of the new energy and attention being paid to Lisp's many rough edges. If we're really lucky and try hard, we may come out at the end of the day with a Lisp language and community that won't make this scenario so common.


Not on the list but ... ASDF binary locations
Monday, December 19, 2005

An ASDF FAQ is "why can't I specify the location of the binaries?". The main reason for this is that this is a job for the site to specify, not the system definer. Though that answer is right on the money, it leaves unsatisfied the question of what to do if you are the site!

Thanks to Google, we can find a good answer in mere moments. Thanks to the CLiki, we can stick that answer where it might be slightly easier to find. I figure that even if everyone else remembers this code, it'll still help me when I forget again how to do this!


What's next?
Monday, December 19, 2005

In spite of the fact that I've read Getting Things Done 9-million times, my personal organizational systems never quite seem up to snuff. Actually, they usually don't even seem up to dish gathering under the bed. In an effort at group think or maybe confession (!), here are some of the tasks I want to tackle next.

  • Add web-output to ASDF-Install-Tester so that it's easy to create nice tables showing every system on every platform and its status
  • Work with ASDF-Install versioning and the never quite completed asdf-install:update command.
  • Improve Tinaa with lots and lots of love
  • Complete a report on Common-Lisp library status (inspired by Paul Dietz).
  • A million other things.

I've also got a closed source project under way that needs attention, several proposals to write and -- gasp -- real work to complete.


Another whack at ASDF-System-connections
Monday, December 19, 2005

I've updated ASDF-System-Connections and think I may finally have gotten it right!? I'm feeling about as thick as molasses -- is that too cliche to say now? -- but after far too many attempts and minor edits, ASDF-Install-tester loads all of my systems happily. You know what they say: "If ASDF-Install-tester is happy, I'm happy."


Another whack at ASDF-Install-tester
Monday, December 19, 2005

I just put a bit more polish on ASDF-Install-tester. The most important change is that instead of mucking with one of your ASDF-Install directories, it does all its work in a temporary directory that you specify. I use ~/temporary but there are probably better default choices. I've also done a bit of work to reduce the dependencies on OpenMCL. I'll probably finish that bit over the next week or so. Finally, I gave it its own page and altered its CLiki page to point at that.


Moptilities is now Closer to the MOP
Saturday, December 17, 2005

As I mentioned earlier this week, I've been rebuilding moptilities on top of Closer to MOP. I've finished now and also made Pascal Costanza's projects ASDF-Installable (see here, here and here). The only difficulty was that Pascal used strings instead of symbols in his ASDF system definitions and ASDF-Install doesn't like that (even though it's valid from ASDF's standpoint)!

I've sent Pascal Darcs patches and the amazing Edi Weitz has already patched ASDF-Install. Once Pascal applies the patches, I'll rebuild the ASDF files... If you do run into problems, you can install lw-compat first (via (asdf-install:install 'lw-compat)). Once that is there, the other packages should install fine (thanks to Kevin Reid for pointing this out).

Now I need to sit back and decide what the next step is...


Tinaa for Allegro
Saturday, December 17, 2005

I just finished getting my trial version of Allegro 7.0 to properly load and run Tinaa (which is a decent test of metatilities, moptilities, and cl-containers too). It's not perfect -- there are several implementation specific things I need to track down -- but it is another step towards world domination.

Next up, SBCL.


Darcs and ASDF-Install
Friday, December 16, 2005

I use Darcs for version control and ASDF / ASDF-Install for Lisp system definitions. Since I have a bunch of packages I maintain, I've been working to automate things as much as I can. I figured I'd document what I do in the hopes that people can help me improve the process or copy it or whatever <smile>. I run under OS X and my Unix skills are mediocre so YMMV. To keep my ASDF systems up to date, I use two shell scripts. The first is make-all-asdf-systems:

#!/bin/sh
make-1-asdf-package "asdf-system-connections" "asdf-system-connections"
make-1-asdf-package "asdf-install-tester"     "asdf-install-tester"
# and so on..

As you can see, it just calls make-1-asdf-package for each of the systems I worry about. Here's make-1-asdf-package itself:

#!/bin/sh
### to do
# More error checking

### Command arguments
# Argument #1 is the source
# Argument #2 is optional. If not supplied, the basename of the target is
#  used as the project root on common-lisp.net. If supplied, it is used
#  as the 'root' on common-lisp.net in cl-containers...

tempDir="$HOME/temporary"

if [ -z "$1" ]; then 
	echo "Must specify source"
	exit 1
else
	if [ -z `basename $1` ]; then
		source="$1"
	else
		source=`basename $1`
	fi
	if [ `dirname $1` = "." ]; then
		sourceDir="$HOME/darcs"
	else
		sourceDir=`dirname $1`
	fi
fi

if [ -z "$2" ]; then
	target=$source
	tarPath="gking@common-lisp.net:/project/$target/public_html"
else
	target=$2
	tarPath="gking@common-lisp.net:/project/cl-containers/public_html/$target"
fi

PASSWORD=`cat ~/.ssh/goomber`
SOFTWARE="${source}_latest"
#SOFTWARE="${source}_$VERSION"
echo "Making $SOFTWARE"

pushd . > /dev/null
cd $tempDir

# Cleanup existing stuff (this script leaves some of it behind just in case...)
if [ -d $source ]; then  # $source is a directory, so test with -d
	rm -r $source
fi
if [ -f $SOFTWARE.tar.gz ]; then
	rm $SOFTWARE.tar.gz
fi
if [ -f $SOFTWARE.tar.gz.asc ]; then
	rm $SOFTWARE.tar.gz.asc
fi

# make new one
darcs get $sourceDir/$source
rm -r $source/_darcs
if [ -f $source/version ]; then
	VERSION=`cat $source/version`
	echo $VERSION
fi
tar -cf $SOFTWARE.tar $source
gzip $SOFTWARE.tar
echo $PASSWORD | gpg --batch --passphrase-fd 0 -b -a $SOFTWARE.tar.gz
rm -r $tempDir/$source


rsync \
	--archive \
	--rsh=ssh \
	--compress \
	-v \
	$SOFTWARE.tar.* \
	$tarPath
popd

That's more like it!

Most of my systems are in ~/darcs but I'm working with a few that are elsewhere. The first bit of the script differentiates these two cases and fills in source and sourceDir. The next bit worries about whether the project has its own place to live on common-lisp.net or whether it lives under cl-containers (for lack of a better place). This lets me fill in target and tarPath.

So that I can automate this, I store my password in ~/.ssh/goomber and suck that in with cat. I'm planning on doing some find-and-replace Perl magic to keep versions up to date but I don't do that yet. Once all the shell variables are set, we do a bit of cleanup and then it's time to get to work.

Working consists of a Darcs get to grab the latest code, removing the Darcs information (stored in _darcs), tarring it up, and rsync'ing it to common-lisp.net. The trickiest part for me was figuring out how to get gpg to work without manual intervention. I also use SSH-Agent so that I don't need to log into common-lisp.net when I rsync.

So there it is. Nothing too fancy but it serves my needs so far.


2004 Timeline
Thursday, December 15, 2005

I just noticed Zach Beane's 2004 Lisp timeline. It's very cool to see all the activity in my favorite language. I wonder what 2005 will look like in retrospect.


Moptilities face lift / body transplant
Thursday, December 15, 2005

I've just posted the new documentation for moptilities now that I'm almost done rebuilding it on top of Closer to MOP. Since I had to rework things in any case, I took the opportunity to change names and munge everything six ways from Sunday (is that Saturday?). I never liked all my stupid "mopu-" this and "mopu-" that in any case and it was fun cleaning it up.

The next steps before I put the new code up are to write some tests (using LIFT) and to get Closer to MOP ASDF-Installable.


ASDF-Install, or not
Wednesday, December 14, 2005

ASDF-Install may not be perfect, but John Wiseman rocks!

Making ASDF-Install better is why I wrote asdf-install-tester. It's very beta, currently only runs under OpenMCL (and probably only under my setup) but I think it has legs.

FWIW, one of the next things on my lisp list (say that 10 times fast) is to automate asdf-install-tester some more and add Web output. If I set things up in a safe sandbox, I ought to be able to get a daily update of which projects build and which ones don't. I think that would be handy.


Closer, closer
Wednesday, December 14, 2005

Pascal Costanza recommends the obvious fix of dispatching on class instead of standard-class to cure the recursion I ran into last night. Silly me.
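For concreteness, here is what that fix looks like applied to the class-precedence definition from the conundrum post below (a sketch; since every class metaobject -- including each implementation's own standard-class -- is an instance of class, the first method now wins no matter which MOP package the class came from):

(defgeneric class-precedence (class)
  (:method ((class class))    ; dispatch on class, not standard-class
           (finalize-class-if-necessary class)
           (class-precedence-list class))
  (:method ((class symbol))
           (class-precedence (find-class class)))
  (:method ((class standard-object))
           (class-precedence (class-of class))))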


I wish I could get someone to pay me for inane comments
Tuesday, December 13, 2005

According to a new BusinessWeek column Apple may be holding back the music business.

Um, I thought that Apple (practically) created the on-line music business qua business.

I figure using "qua" in a post will give me more cachet and bring in extra bucks. I've heard that added bits of French can do the same thing, n'est-ce pas?


Closer to MOP / Moptilities conundrum
Tuesday, December 13, 2005

I've started to rewrite moptilities with Closer to MOP as a firmer foundation and I've run into a conundrum. Here is the definition of class-precedence (that's not a great name but please ignore that for now <smile>). It is equivalent to the MOP function class-precedence-list but for convenience's sake you can pass it an object or a symbol naming a class.

(defgeneric class-precedence (class) 
  (:method ((class standard-class))
           (finalize-class-if-necessary class)
           (class-precedence-list class))
  (:method ((class symbol))
           (class-precedence (find-class class)))
  (:method ((class standard-object))
           (class-precedence (class-of class))))

Note that the moptilities package now uses closer-common-lisp rather than common-lisp and all of the various MOP packages from different implementations. This means that the standard-class symbol in the above definition is c2mop:standard-class. If I call #'class-precedence with a symbol, it calls find-class which returns an instance of <lisp>:standard-class (not c2mop:standard-class). The recursive call then matches on the standard-object method -- not, as you'd naively expect, the standard-class method -- and we end up in an endless recursive loop.

Here's my first thought at fixing things:

(defgeneric class-precedence (class) 
  (:method ((class standard-class))
           (%class-precedence class))
  (:method ((class symbol))
           (%class-precedence (find-class class)))
  (:method ((class standard-object))
           (%class-precedence (class-of class))))

(defun %class-precedence (class)
  (finalize-class-if-necessary class) 
  (class-precedence-list class))

This prevents the endless recursion but the dispatch problem remains: a class object is an instance of the implementation's standard-class, not c2mop:standard-class, so calling class-precedence on one still runs the standard-object method, and that calls %class-precedence with (class-of class) -- the metaclass -- which isn't what I wanted.

Maybe there's some easy way around this that doesn't involve a bunch of #+'ing but I'm not seeing it. Any hints the cosmic unconscious wants to send my way are welcome.


Asimo keeps growing up
Tuesday, December 13, 2005

Honda's Asimo (not Sony's Aibo) is at version two (with video).


Microsoft Monad (MSH) under the hood
Tuesday, December 13, 2005

Ryan Paul wrote an extensive overview of Microsoft's forthcoming new Windows shell for Ars Technica back in October. I finally got around to reading it this afternoon. In short, it looks as if Microsoft may have a winner here.

It's interesting to see the different strategies each camp (Microsoft, Apple, Linux as a whole) is taking. I'm not knowledgeable enough to say anything about Linux but Apple's Automator and Microsoft's Monad are heading in very different directions: Apple is adding a thin veneer to existing functionality whereas MS is redoing the whole shell thing with objects taking the place of text.

Both techniques are important for making computers easier to use but I hope that Monad's ideas get grabbed by the other camps; it really looks like good stuff.


Why automate?
Tuesday, December 13, 2005

Because you're maintaining your good automation habits, and your good refactoring habits. And you're gaining experience with your automation environment. Next time I write an elisp function I'll be better prepared to deal with Emacs's regular expression syntax (and the myriad other little details I wrestled with during that hour.)

From a 2004 post on saving time by wasting it by Steve Yegge. It's a bit verbose but contains some good stuff nonetheless.


Will IVR drive remixable UI?
Tuesday, December 13, 2005

Jon Udell asks if Interactive Voice Response (IVR) will drive voice/data integration and remixable user interfaces:

[the demo] highlighted the notion of composable and remixable user interfaces. Instead of sharing your whole desktop, or a complete application window, you could share something as specific as an account-editing form. Why? More privacy for you, less clutter for the agent trying to help you.

That sounds improbable when you survey the fragmented GUI landscape: AJAX, Flash, .NET, Java. But common patterns do exist, and in each of these niches you can find one or more XML vocabularies to describe them: XUL, MXML, XAML, and others. Maybe it's a pipe dream to imagine a unifying standard in this space, but it's one that I can't ignore. So it was heartening to see that the W3C has taken up the cause.

Sounds hard: getting what you want without getting what you don't is tough. It also sounds very cool.


Where are the other 99 parts?
Tuesday, December 13, 2005

John Wiseman answers important questions while leaving open the greatest mystery of all: where are the other 99 parts in that patent diagram?!


Tinaa and KMRCL
Tuesday, December 13, 2005

As an example, I've added Tinaa documentation for Kevin Rosenberg's most excellent KMRCL utility collection. I'm thinking about automating things so that I can build Tinaa documentation for every ASDF-Installable library out there. After all, someone has to use up all that free space on common-lisp.net.


About that last post
Monday, December 12, 2005

I just realized that I had not actually recorded the Darcs patch before building new ASDF files to fix the problem I mentioned in ASDF-System-Connections. Man, I can sure be brain dead.

On another note, I just fixed an old Tinaa bug that prevented it from playing nicely with symbols that had a #\* or #\/ in them. I also realized that Tinaa was assuming that the logical host "tinaa:" was present. I fixed this and, in the process, pulled some defsystem-compatibility code into metatilities.


Metabang Infinite software loading bug fixed
Sunday, December 11, 2005

I finally managed to figure out the infinite loop problem in loading certain metabang systems. I believe that it is now fixed (and on the web, ASDF-Installable, etc.). The problem was in the asdf-system-connections extension. I had defined system-loaded-p as

(defun system-loaded-p (system-name)
  (let ((load-op (make-instance 'load-op))
        (system (find-system system-name nil)))
	(and system
	     (operation-done-p load-op system)
	     (null (traverse load-op system)))))

because I was under the impression that asdf::traverse was side-effect free. That was the wrong impression. A better definition of system-loaded-p is:

(defun system-loaded-p (system-name)
  (let ((load-op (make-instance 'load-op))
        (system (find-system system-name nil)))
    (handler-case 
	(and system 
	     (operation-done-p load-op system)
	     (null (traverse load-op system)))
      (error () 
	;; just treat any error as 'not loaded' for now...
	nil))))

My next goal is to redo moptilities on top of Closer to MOP.


Small services
Thursday, December 8, 2005

Dan Moniz announces Small Services.

Small services are small, independently published and maintained "services" available via people's websites, with a limited scope and minimal amount of needed formality to be automatically useful to simple programs, as well as humans.

Beautiful. Let a thousand flowers bloom!


Google ads
Thursday, December 8, 2005

I've finally joined the bandwagon and added Google ads to unCLog. I'm sure I'll be able to quit my day job any day now.

Hmmm, or maybe Google thinks that I should change my day job!

Mr. Rooter! Fix clogged pipes! I guess I should have thought about that before I named my blog. They got it right later on most of the other pages:

Functional programming, OS X. Much better.


Broken Angels
Tuesday, December 6, 2005

Richard Morgan's sequel to Altered Carbon finds Takeshi Kovacs fighting treachery of many sorts in a world corrupted by power, government and money. Aside from the advanced technology -- both human and remnants left behind by the mysteriously vanished Martians -- it's an existence much like ours. Broken Angels is as gripping as Altered Carbon and its plot is dense and compelling. I found it a bit muddier in places but still a wonderful page turner. I'm looking forward to reading Woken Furies.


ASDF-Install tester
Tuesday, December 6, 2005

ASDF-Install-tester automates the process of checking whether or not your ASDF-installable systems actually install under ASDF. To use it:

  • Use ASDF-Install to install ASDF-Install-tester.
  • Modify the file definitions.lisp (see the sketch after this list) to specify:
      • which systems should be tested (in the variable *systems-to-test*),
      • which systems should be removed before each test (in the variable *systems-to-remove-each-time*),
      • your local ASDF-Install directory (in *local-asdf-install-directory*), and
      • a working directory (in *working-directory*).
  • Start a lisp and ASDF load 'asdf-system-connections (see note below)
  • Finally, evaluate (asdf-install-tester::main)
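For the record, here is roughly what my definitions.lisp settings look like (the variable names are the real ones from the list above; the values and the package are my guesses for a typical setup, so adjust to taste):

(in-package #:asdf-install-tester)

(setf *systems-to-test* '(metatilities moptilities cl-containers)
      *systems-to-remove-each-time* '(metatilities)
      *local-asdf-install-directory* #P"/Users/gwking/lisp/asdf-install/"
      *working-directory* #P"/Users/gwking/temporary/")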

As of today, this only works under OpenMCL but porting to other Lisps should be pretty easy. See the CLiki page for more details and links to get a tarball or access the Darcs repository.


File system magic
Friday, December 2, 2005

If I open a file in Apple's Preview application, go and change the file's name and then go back to Preview, the application manages to figure out the new file name and display it. Even better, if I open the file and then move it and change the name, Preview still updates correctly. If I rm the file (not just move it to Trash), Preview keeps the name as it was but it is "smart" enough to not add the file to the list of recently open items. The change only happens when Preview activates so it's either not using some sort of kevents queue or it is and is just waiting until it thinks it should actually do the work. Getting the fileschanged utility to run under OS X seems to be beyond my poor Unix porting skills. Does anyone know how Preview does this or how to get fileschanged working? If so, please let me know...

Update (3 Dec 2005)

Based on several e-mails and another search of the olde web, I realize that my initial kqueue/kevent thoughts are probably correct. The last time I searched for this stuff, I found the fileschanged project (and couldn't get it to work under OS X) but I didn't find these Apple pages (here and here). Now I have another itch to scratch! This search for knowledge and understanding is fun and exasperating.


Apologies to Tinderbox...
Thursday, December 1, 2005

Yesterday, I complained about Tinderbox's HTML entity encoding (I didn't think it was handling ampersands correctly). Today, Mark Bernstein, the creator of Tinderbox, wrote me and showed me an excellent solution. How about that for great support!

Previously, I used ^encode(^text(this)) to spit out the <description> of my feed items. The ^text and ^encode are Tinderbox specific commands that get the text of the current note and encode the markup in it, respectively. My contention was that ^encode wasn't working properly because it didn't convert '&'s into &amp;. Mark suggests using <![CDATA[ ^text^ ]]> instead to tell RSS readers that the contents are just data and shouldn't be parsed.

Mark also points out that the description was initially intended to be just text, not markup and that this has led to a lot of slips betwixt cup and lip.

So, thanks Mark, thanks Tinderbox, and boo hiss on all this encoding and double-encoding confusion.

(additional apologies ... I had some problems with this entry correctly encoding the caret symbols that Tinderbox uses in its markup and control language!)


Damn HTML entity encodings
Wednesday, November 30, 2005

Picking up where John Wiseman left off a long time ago...

I use Tinderbox to write my blogs. It's far from perfect but is quite customizable and works well enough that I haven't wanted to take the time to switch to something better. On the other hand, whenever I post a link that has embedded ampersands, my RSS becomes invalid. Sucky.


Patch for ASDF-Install
Wednesday, November 30, 2005

I'm not sure who is maintaining asdf-install but the following patch improves the error message when GPG isn't installed or isn't found when verify-gpg-signature/string calls make-stream-from-gpg-command to do its stuff.

If anyone knows of a better place to post this, please let me know.

The patch just adds a condition and a check that the putative call to GPG returned something that looks like it came from GPG <smile>.

196a197,202
> (define-condition shell-error (error)
>   ()
>   (:report (lambda (c s)
>              (declare (ignore c))
>              (format s "Call to GPG failed. Perhaps GPG is not 
> installed or not in the path."))))
> 
401a408,412
>     
>     ;; test that command returned something 
>     (when (null tags)
>       (error 'shell-error))
>

Spreadsheets über alles
Tuesday, November 29, 2005

Avi Bryant and company are working on Dabble which aims to provide incremental development for the rest of us using the metaphor of the spreadsheet. Meanwhile, Dan Bricklin has just released wikiCalc 0.1, which is for:

creating and maintaining web pages that include data [that] is more than just unformatted prose, such as schedules, lists, and tables. It combines some of the ease of authoring and multi-person editability of a wiki with the familiar formatting and data organizing metaphor of a spreadsheet.

Must be something in the water.


the MPAA takes security seriously
Monday, November 28, 2005

... the line was moving slowly because they were asking customers to raise their arms so that they could be electronically frisked with a metal detector, and women's purses were being searched by uniformed security guards.

This was at a screening for Derailed in Toronto, Canada. I had the feeling that Canada was a bit more sensible but I guess capitalism is one of those cross border things. The story only gets worse, read it in full at David Farber's interesting people archives.


Altered Carbon
Monday, November 28, 2005

Markus Fix recommended Altered Carbon ages and ages ago (ok, it was really less than two years but that's a decade in internet time <grin>) but it didn't reach the top of my stack until last week. What a wonderful book! Interesting science fiction, great plot, fine characterization; it was a treat to read.

The only thing for me not to like was Morgan's dubious philosophical premise that we can store minds and swap them from one body to another. I'm strongly of the opinion that embodiment has far more importance than the Western tradition allows (see, for example, here). But all fiction requires some suspension of disbelief and Altered Carbon is well worth the effort. Besides, the issues Morgan raises regarding identity, person-hood, and psychology using body swapping as the mechanism make for excellent dream time fodder. Highly recommended.


Microsoft marketing speak - insulting, funny
Sunday, November 27, 2005

Open Source Dorks.


Surprise: Enrollment drops when laws make it hard to enroll
Sunday, November 27, 2005

From the Computing Research Associates bulletin:

The number of international students enrolled in Computer and Information Sciences (CIS) at all degree levels in the United States fell 32.5 percent between 2003/04 and 2004/05, according to the Institute of International Education's Open Doors 2005 report. Foreign students enrolled in CIS numbered 57,739 in 2003/04 and 38,966 in 2004/05.

And lest you think it's an across the board sort of thing:

Among all fields, foreign enrollments declined 1.3 percent. Between 2002/03 and 2003/04, foreign enrollments declined 2.4 percent. Previous to this, foreign enrollments experienced decades of significant growth.

Diversity is strength and that's especially true in science, research, and any other creative endeavor. We've made it significantly more difficult for foreigners to come here and go to school (or even visit). It's a dumb thing.


A digression on error messages
Saturday, November 26, 2005

I was at the library today and a frantic woman asked if I could help her try to burn some files from her floppy drive to a CD on one of the library's computers. I'm not sure why she picked me; perhaps I look really proficient at catalog searching?! In any case, I futzed around a while and kept getting messages like "incorrect function" (when trying to view drive F:, er, the CD) and something like "The copy has failed. Please try another CD or give up and go home." Man, windows is

just

so

bad.

To round things out, I came across this wonderful Windows screen shot tonight via 43 folders. Nice.


Beyond Modularity
Saturday, November 26, 2005

A blast from the past (August, 1999!) review of Annette Karmiloff-Smith's wonderful Beyond Modularity:

Karmiloff-Smith proposes to view cognition and development as a series of Representational Redescriptions that occur across cognitive domains/modules and across developmental phases. Her view of mind is flexible and varied. She believes in a process of modularization (contra Fodor who claims that the modules are innately specified and not subject to change) and in a process of multiple domain specific phases (contra Piaget who claimed that development proceeds in domain general stages where the entire system changes at once from one stage to the next).

Representational Redescription (RR) is a model of how implicit procedural knowledge is encoded and re-encoded into more and more explicit forms until it finally becomes declarative. For example, when one learns to play a song at the piano, one first must play the piece as a whole. With time and practice, one becomes able to play parts of the piece without having to start each time at the beginning. Finally, one may be able to improvise with the piece and work with its parts in their own right. This movement from implicit "programmed" procedural knowledge to explicit declarative knowledge is re-enacted across domains and modules in multiple time scales and levels of detail.

The RR framework explains the ubiquitous U-shaped mastery curve: performance at a skill rises to a high level, then drops and then rises again. The RR framework explains this by saying that the initial mastery is due to an implicit understanding of the problem domain. Development continues after mastery is achieved, however, as redescription attempts to bring understanding to the implicit behaviors. The initial redescriptions often fail to properly distill the correct essence of the procedures and therefore produce less adequate performance. As redescription continues, a correct declarative model of the task is achieved and mastery returns.

Beyond Modularity is divided into an introduction; five chapters that discuss RR and development as it relates to Language, Physics, Mathematics, Psychology and Notation / Drawing; and two final chapters that discuss the more theoretical aspects of Karmiloff-Smith's work. Each chapter provides excellent reviews of relatively recent research into child development viewed through the lens of her RR framework. As an example, chapter 3: The Child as Physicist discusses an experiment in which 4- to 9-year-olds were asked to balance blocks on a narrow support. Some blocks were normal, others had a weight glued to one end and still others had a weight hidden inside them. The 4- and 8-year-olds both perform well at the task regardless of the kind of block. 6-year-olds, however, continually attempt to balance the block at its midpoint, regardless of the kind of block that they are balancing. A more careful analysis of the children's behavior shows that 4-year-olds balance by proprioception alone whereas the 8-year-olds correctly classify each block-type and have an explicit understanding of how the weight affects the balance. 6-year-olds, however, have a model of balance that appears to be based entirely on length and their failure to balance the other blocks is viewed as anomalous data that can be rejected. Interestingly, the same 6-year-olds can easily use the very same blocks to build a house. It appears that their failure to balance only occurs when they are calling upon their explicit knowledge of balancing.

Beyond Modularity presents a flexible, engaging and non-dogmatic view of the development of mind. The RR framework appears to fit well with many of the observed behaviors of children and adults as they master new domains. That being said, the RR model remains silent as to how and why redescription actually occurs. What motivates redescription? Why are humans theoreticians and not just inductivists? How does the magic that turns implicit procedures into explicit theories function? Unless these questions can be answered, RR is an interesting implicit story of cognition that needs to undergo its own redescription into explicit form before it can be useful for actually building intelligent systems or truly understanding natural ones.


Don't search the whitehouse?
Saturday, November 26, 2005

This appears to be their robots.txt file. It looks like everything is disallowed?!


A tiny bit of del.icio.us / cl-graph fun
Friday, November 25, 2005

I've been using del.icio.us for a while now and like it. So far, most of my usage has been entirely personal; I use it as a bookmark repository and don't explore other people's tags or posts. The main reason for this is that I already have way too much to read and do and finding stuff to add to my lists doesn't look as if it's going to become a problem any time soon!

I do, however, have strong interests in folksonomy, ontology, and networking and I've wanted to try some simple visualizations of my data. This turns out to be a natural enough reason to let me write a brief tutorial of CL-Graph and CL-Containers. I used the del.icio.us API to get an XML file of all of my posts. I then used the XMLS package to parse the XML into this:

("posts" (("user" "gwking") ("update" "2005-11-21T15:26:00Z"))

("post"

(("time" "2005-11-21T15:25:47Z") ("tag" "yoga health exercise amherst")

("hash" "9aad47baf972813c8202b43a56e95a61")

("description" "Yoga Center Amherst, Massachusetts")

("href" "http://www.yogacenteramherst.com/")))

("post"

(("time" "2005-11-21T13:30:18Z") ("tag" "kids soccer sports")

("hash" "7d2e120f77e7129753b53a9ab74f1763")

("description" "Home - Allsport Soccer Arena - Northampton, Massachusetts")

("href" "http://northamptonsoccer.com/site/")))

...)

Next, I created a class to hold the information about a post (this is probably better done as a structure but I tend to use objects unless I really care about space and time). The defclass* macro is part of Metatilities and mainly exists to save typing all those :initforms and :readers and whatnot.

(defclass* delicious-post ()
  ((post-time nil ia :initarg :time)
   (tags nil ia :initarg :tag)
   (hash nil ia)
   (extended nil ia)
   (description nil ia)
   (post-href nil ia :initarg :href)))

(defun determine-tag-counts (delicious-post-file)
  "Returns a list of tags and their counts from a delicious-post-file."
  (bind ((posts (xmls::parse delicious-post-file))
         (tags (collect-elements 
                ;; the first two elements of posts aren't tags
                (cddr posts)
                :transform
                (lambda (post-info)
                  (let ((tags (find "tag" (second post-info) 
                                    :test #'string-equal
                                    :key #'first)))
                    (when tags 
                      (tokenize-string (second tags) :delimiter #\ )))))))
    (element-counts 
     (flatten tags)
     :test #'equal)))

The next bit o' code reads in the XML file and aggregates all the tags. The funny bits are bind, collect-elements, tokenize-string, flatten and element-counts. Collect-elements is sort of like mapcar but it works for any kind of container: both Common Lisp ones like lists, vectors and hash-tables and CL-Containers ones like red-black-trees, heaps, and stacks. Element-counts is another CL-Containers method. It returns an associative list of each unique element (where uniqueness depends on the test) and the number of times it appears.

If I try determine-tag-counts on my posts, I get:

(("techology" 1) ("christmas" 1) ("parallelism" 1) ("mail-and-print" 1) ("alternative-energy" 1) ("review" 2) ("blog-this" 2) ("autism" 2) ("neurology" 1) ("alf" 1) ...)

But I probably really want to see the results sorted. We can do that by adding :sort #'> :sort-on :counts to the call to element-counts.
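Here's the adjusted call (a sketch; the :sort and :sort-on keywords work as just described):

(element-counts
 (flatten tags)
 :test #'equal
 :sort #'>
 :sort-on :counts)

which gives: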

(("to-read" 134) ("book-to-read" 66) ("software-development" 24) ("computer-science" 20) ("software" 19) ("programming" 15) ("science" 9) ("been-read" 9) ("social-software" 9) ("unix" 9) ...)

Oh dear. Looks like I'm behind on my reading...

Visualizing Tag interconnections

Neither tags nor posts sit by themselves. If I made a graph with a vertex for each post and another one for each tag and then added a link between each post and its tags, I'd end up with a bipartite graph. To simplify, I could then project the bipartite graph down onto only its tags (or posts) to create a new graph whose vertexes were all tags (or posts) and where two tags (or posts, etc) were linked when they both shared a post in the original graph. Here's how it would look in CL-Graph.

(defun create-bipartite-tag/post-graph (delicious-post-file)
  "Creates a bipartite graph of tags, posts and the links between them from 
a delicious post file."
  (bind ((posts (parse-delicious-posts delicious-post-file))
         (g (cl-graph:make-graph 'cl-graph:graph-container)))
    (iterate-elements 
     posts
     (lambda (post)
       (iterate-elements 
        (tags post)
        (lambda (tag)
          (cl-graph:add-edge-between-vertexes g post tag)))))
    g))

Create-bipartite-tag/post-graph parses the XML as before and then makes a graph to hold on to them. Make-graph is a synonym for make-instance and here we make a graph-container (which is more or less a graph represented by an adjacency list but we don't have to worry about that). Iterate-elements is to mapc as collect-elements is to mapcar. We use it to call add-edge-between-vertexes for each post and tag.
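By the way, I didn't show parse-delicious-posts. Here's a sketch of what it might look like, built on the XMLS output format above and assuming that defclass* gave us accessors named after the slots (the real version may differ):

(defun parse-delicious-posts (delicious-post-file)
  "Turn a del.icio.us XML export into a list of delicious-post objects."
  (collect-elements
   ;; as before, the first two elements aren't posts
   (cddr (xmls::parse delicious-post-file))
   :transform
   (lambda (post-info)
     (let ((post (make-instance 'delicious-post)))
       ;; each attribute is a (name value) pair
       (loop for (name value) in (second post-info) do
             (cond ((string-equal name "time") (setf (post-time post) value))
                   ((string-equal name "tag")
                    (setf (tags post) (tokenize-string value :delimiter #\Space)))
                   ((string-equal name "hash") (setf (hash post) value))
                   ((string-equal name "extended") (setf (extended post) value))
                   ((string-equal name "description") (setf (description post) value))
                   ((string-equal name "href") (setf (post-href post) value))))
       post))))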

Now that we have the bipartite graph, we can use project-bipartite-graph to get the unipartite projection onto tags and then call graph->dot to see what it looks like:

(cl-graph:graph->dot
 (cl-graph:project-bipartite-graph 
  (cl-graph:make-graph 'cl-graph:graph-container 
                       :default-edge-class 'cl-graph:weighted-edge)
  full-graph
  'keyword
  (compose 'type-of 'element))
 "user-home:temporary;all-tags.dot"
 :vertex-labeler 
 (lambda (vertex stream)
   (format stream "~(~A~)" (symbol-name (element vertex))))
 :edge-formatter
 (lambda (edge stream)
   (format stream "weight=~D" (cl-graph:weight edge))))

The project-bipartite-graph method takes as input the graph to be created (either a symbol naming the class or an existing (presumably empty!) graph), the graph to project (full-graph in the example code), a value specifying which vertexes to project, and a function that will be applied to each vertex in the original graph. In this case, I use the function built by composing #'element and #'type-of. When called on a vertex, this will return the type of whatever is contained in the vertex. The way I built the graph, this will be a keyword for tags and a delicious-post for posts.

Graph->dot has lots of parameters that can be used to control the output. Here, I use the vertex-labeler and the edge-formatter to specify a little bit of fanciness.

(Update 2005-11-28) As I learned to my chagrin, the output is huge (5.2 megabytes). If you'd like to see it, you can click to open it full size in another window.

That, however, is a bit of a mess (even enlarged). I'd really like to focus in on one or two tags and see what that looks like. I'll use make-filtered-graph to see only the tags that are linked to "lisp":

(cl-graph:graph->dot
 (cl-graph:make-filtered-graph
  (cl-graph:project-bipartite-graph 
   (cl-graph:make-graph 'cl-graph:graph-container 
                        :default-edge-class 'cl-graph:weighted-edge)
   (create-bipartite-tag/post-graph #P"user-home:temporary;all-posts.xml")
   'keyword
   (compose 'type-of 'element))
  (lambda (v)
    (search "lisp" (symbol-name (element v)) :test #'string-equal))
  :complete-closure-with-links
  1)
 "user-home:temporary;lisp-tags-20051125.dot"
 :vertex-labeler (lambda (vertex stream)
                   (format stream "~(~A~)" (symbol-name (element vertex))))
 :edge-formatter (lambda (edge stream)
                   (format stream "weight=~D" (cl-graph:weight edge))))

(click to open full size in its own window)

That's more like it. The make-filtered-graph method takes a graph, a vertex filter, a completion style and a depth. In this case, we select the vertexes whose names contain "lisp" and go out to a depth of one. Then we include all of the links.

I've only touched on a few of the methods in CL-Containers and CL-Graph but I hope I've whetted your appetites for more.


Beautiful Lisp logos from Manfred Spiller
Wednesday, November 23, 2005

Sweet!

Via Edi Weitz via Bill Clementson.


Lisp Girl clothing looks great on everyone
Tuesday, November 22, 2005

Markus Fix of Lispmeister fame has a wonderful new selection of Lisp Girl attire.


Announcing CL-Graph and other stuff
Friday, November 18, 2005

CL-Graph now compiles under SBCL and OpenMCL (under OS X 10.4). There is a Darcs repository, a CLiki page, it's ASDF-Installable, you can grab a tar ball (or is that tarball?) and there is even a bit of documentation. Wonders never cease.

CL-Graph is a Common-Lisp library for manipulating graphs. It's ported from stuff I wrote for my old job and now released under the MIT License. Because my needs were somewhat peripatetic, the coverage of the library is, well, odd. It does provide a good starting point and I'm hoping that the CL community can help build it into a very awesome tool.


Something funny in Macdom
Tuesday, November 15, 2005

I finally got around to reading Neal Stephenson's In the beginning was the command line. You can choose to download it in either "PC Zip" or "Mac Stuff it" format.

The funny thing is that the "PC Zip" format works great on OS X but the "Mac Stuff it" one gives me this warning:

We live in strange times indeed.


ASDF System Connections
Monday, November 14, 2005

I used a home grown defsystem (EKSL's Generic Load Utilities) at my previous job but have been slowly moving towards ASDF over the last few months. Mostly, the differences are syntactic and it's a matter of mucking from one form to the other -- I know, I know, I should have written a program to do this but...

ASDF is, however, missing one feature that I had only recently added to GLU: the ability to automatically load other systems when they become relevant. An example or two might be in order.

Example #1: Metatilities is my basic tool set and bind is a handy thing. Metatilities and bind are separate tools and can be downloaded and used individually. However, when I have both metatilities and bind loaded, I'd like bind to be available in the metatilities package.

Example #2: Cl-Containers is a container library. CL-Variates is a set of portable Common Lisp random number generation routines. These are separate tools but when both are available, I want to use the routines in CL-Variates to select random elements from my containers.

Currently, I can accomplish both of these tasks by defining additional systems and loading them as needed. No real trouble but it's a hassle. I'd rather be able to express this and have my system definition tool take care of the rest. Thus: asdf-system-connections. It adds a new macro that's almost but not quite the same as defsystem. The addition is a :requires clause that lists the systems upon which this connection depends...

Here is a simple example from metabang.bind's system definition:

(asdf:defsystem-connection bind-and-metatilities
       :requires (metabang.bind metatilities-base)
       :perform (load-op :after (op c)
                         (use-package (find-package "METABANG.BIND") 
                                      (find-package "METATILITIES"))))

The requires clause specifies the other systems that must be loaded before this connection will be activated. The rest of the system definition is regular ASDF. System connections will be loaded as soon as the systems they require are all loaded, and they will only be loaded once. Before loading a system that uses a system connection, you should load ASDF-System-Connections in the usual manner:

(asdf:oos 'asdf:load-op 'asdf-system-connections)

ASDF-System-connections is brand new and relatively untested. It is available via darcs, ASDF-Install and direct download. Please let me know if anything goes awry.
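For what it's worth, Example #2 from above would follow the same pattern. Something like this (a guess at the shape; the system and package names here are hypothetical):

(asdf:defsystem-connection containers-and-variates
  :requires (cl-containers cl-variates)
  :perform (load-op :after (op c)
                    ;; make the random-number routines visible to the containers
                    (use-package (find-package "CL-VARIATES")
                                 (find-package "CL-CONTAINERS"))))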


Technical and Social Features of Categorisation Schemes
Monday, November 14, 2005

Paul Dourish has done interesting things in Computer Supported Cooperative Work (CSCW) and other areas too. This technical report offers the thesis that humans and computers don't get along because the usual programming models are neat and the real world is irredeemably scruffy. In particular, the usual Smalltalk-like message-passing model of most object-oriented languages fails to capture the different roles that objects can play based on the current situation (note that multi-methods provide some amelioration to this). Dourish talks about predicate classes -- think of #'eql methods with eql replaced by any predicate! -- and subject-oriented programming as attempts to make categorization more context based (and I'd be remiss if I didn't give Pascal Costanza a big shout out for ContextL). This is just a note, so Dourish doesn't go into any details but I think that he is absolutely correct in saying that computers don't help nearly as much as it seems they should and that part of the reason for this is that the tools we have don't let us express the messiness inherent in our concepts.


Magic Kingdom for Sale: Sold
Sunday, November 13, 2005

Terry Brooks pens a cute, quirky and forgettable book about a jaded lawyer who buys a real magical kingdom, becomes king and saves the day. It's fun but I wouldn't want to live there.


This Perfect Day
Sunday, November 13, 2005

Charles Petzold mentioned this book in an interesting essay about Visual Studio. It reminded me of a short story I read a long time ago so I got This Perfect Day out from the library and read it. It's very readable but mostly predictable. There are some decent plot twists but they don't remain surprising after the fact the way some do.


Richard Cook hacks Google, the OS X address book and more
Thursday, November 10, 2005

Richard Cook (of I need closure(s)) has a great demo using OpenMCL of mashing Google Maps, the OS X Address Book, and OnTok. It's cool, accessing Cocoa in OpenMCL looks really easy.


SICP video lectures for the iPod
Saturday, November 5, 2005

Not only is this way cool but it also gives me a much better excuse to buy one of these beauties! The Structure and Interpretation of Computer Programs is the book that rekindled my interest in programming, computer science and graduate school. It's truly an astounding piece of work.


Emergence of a small world from local interactions: modeling acquaintance networks
Thursday, November 3, 2005

Returning to the "how do you generate a random network" theme, we have this ditty by Davidsen, Ebel and Bornholdt. Rather than growing a network (as in Holme and Kim's work), they start with a network of some fixed size and add interactions. The model is based on the idea that you meet many of your friends via other friends. Thus, they begin with a graph of size N and a bunch of links. At each step, one vertex is picked at random and two of its neighbors are "introduced" to one another. If the vertex picked has fewer than two neighbors, it adds a link to some other random vertex. Finally, with probability p a randomly chosen vertex is removed from the graph (together with its edges) and replaced by a new vertex with only one (randomly chosen) link.

When these steps are iterated, the behavior of the graph in its steady state will depend on p. If p is close to one, then the "death" process will equal the linking process and the graph will be very Poissonian (i.e., the vertex degree distribution will be Poisson). If p is small, however, then the introductions will outweigh the deaths and the graph will have a scale-free (power-law) degree distribution. It will also have a high clustering coefficient and short path lengths. Thus, this simple model produces small-world graphs whose degree distribution can be tuned (using p) between scale-free and exponential. Pretty neat.
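Just to fix the ideas, here is one iteration of the model in toy form (my own sketch from the prose above using a bare adjacency hash-table; it's not the authors' code and it assumes the graph has at least a few vertexes):

(defun random-element (list)
  "Return a uniformly random element of list."
  (nth (random (length list)) list))

(defun link (graph u v)
  "Add an undirected edge between u and v."
  (pushnew v (gethash u graph))
  (pushnew u (gethash v graph)))

(defun acquaintance-step (graph p)
  "Introduce two neighbors of a random vertex (or add a random link if
it has fewer than two); then, with probability p, replace a random
vertex with a newcomer that gets a single random link."
  (let* ((vertexes (loop for v being the hash-keys of graph collect v))
         (u (random-element vertexes))
         (neighbors (gethash u graph)))
    (if (>= (length neighbors) 2)
        (let ((v1 (random-element neighbors)))
          (link graph v1 (random-element (remove v1 neighbors))))
        (link graph u (random-element (remove u vertexes))))
    (when (< (random 1.0) p)
      (let ((dead (random-element vertexes))
            (newcomer (gensym "V")))
        (dolist (n (gethash dead graph))
          (setf (gethash n graph) (remove dead (gethash n graph))))
        (remhash dead graph)
        (link graph newcomer
              (random-element (loop for v being the hash-keys of graph
                                    collect v)))))))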


the Small-world phenomenon: an algorithmic perspective
Thursday, November 3, 2005

This paper has been on my reading list for the last several years and I finally got to it yesterday. When Kleinberg wrote it, everyone and their cousin had been gabbing on about the "Small World" phenomenon and "six degrees of separation". But it was Kleinberg who had the genius to realize that Milgram's original experiment provided two insights:

  • That short chains of acquaintances seem to connect us all and
  • That people are able to navigate along those chains!

Kleinberg then goes on to generalize the Watts-Strogatz ring model to grids of arbitrary dimension. This provides a more realistic notion of space (and therefore distance). In Kleinberg's model, each vertex is linked to all of its local neighbors (those within a distance of p) and then has q additional links added to more distant vertexes. The kicker is that the probability that a vertex u has a link added to a vertex v is inversely proportional (to some power r) to the distance between u and v. Different values of r provide different distributions. For example, if r is 0, we get the uniform distribution (which is essentially the Watts-Strogatz model). Furthermore, the larger r becomes, the more local the links will be.

Kleinberg then goes on to show that navigation is only possible when r is equal to the dimension of the underlying grid. E.g., in a 2-dimensional world, navigation is only possible (in general) when r is 2. What's more, the navigation strategy is simple: at each step, choose the neighbor that is closest to the target. It's a gorgeous little paper: a beautiful result beautifully presented.
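Here's a toy rendering of the two key pieces (entirely my own sketch, not code from the paper), with grid points represented as (x . y) conses:

(defun lattice-distance (u v)
  "Manhattan distance between two grid points."
  (+ (abs (- (car u) (car v)))
     (abs (- (cdr u) (cdr v)))))

(defun long-range-link-probability (u v r all-vertexes)
  "Probability that u gets a long-range link to v: proportional to
d(u,v)^-r, normalized over all the other vertexes."
  (flet ((weight (w)
           (if (equal w u) 0 (expt (lattice-distance u w) (- r)))))
    (/ (weight v) (reduce #'+ (mapcar #'weight all-vertexes)))))

(defun greedy-step (target neighbors)
  "The decentralized routing rule: move to the neighbor closest to target."
  (first (sort (copy-list neighbors) #'<
               :key (lambda (v) (lattice-distance v target)))))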


Handy OS X Command Line tweak
Thursday, November 3, 2005

I use the T shell (tshell? my gawd, I don't even know how to spell it. How embarrassing.) and adding set complete=enhance makes navigating the OS X command line much easier. As far as I know, all it does is make completion case-insensitive. It's a treat! There were lots of other useful tips here too.


Sony going phoney
Thursday, November 3, 2005

In a clever effort designed to sell more iPods and increase the popularity of Apple's iTunes Music store, Sony Corporation decides to install spyware on its customers' PCs. Brilliant. Steal the product and have no troubles. Buy the product and get screwed. Sony is very clever. Um, not.


the Speed of Dark
Wednesday, November 2, 2005

The Speed of Dark is a work of profound and wonderful science fiction. Like the best of its genre, it plays with the possible in order to question the actual. Elizabeth Moon paints a near-future world where Autism can be corrected before birth. The book follows a group of the last generation of autists (the ones born before the cure existed) and one of them, Lou Arrendale, in particular. You can read the plot summaries and other reviews over at Amazon so I won't do that here. I will say that Moon does a marvelous job raising issues of identity, disability, and personal growth. Even better, she doesn't try to answer them. That's up to each of us.


Toothpaste Rant
Wednesday, November 2, 2005

<ranting>

How many kinds of toothpaste do we need? How many do we want? How many is too many?

I feel cheated that I must wade through more than a dozen varieties of a single brand to try and find one that I think my son might like -- there's a sampler for Bob's sake... and 8 different kinds for whitening alone. What a waste of my time; what a waste of resources.

</ranting>

A very cool illusion
Tuesday, November 1, 2005

This is a wonderful visual illusion: dots change color and disappear. Who says what you see is what you get?


Growing scale-free networks with tunable clustering
Tuesday, November 1, 2005

This is yet another network generation paper. As you may recall, the Barabasi-Albert model provides scale-free distributions of the vertex degree (i.e., it's a power law: a few vertexes have a huge number of edges, lots of vertexes have many edges and bazillions of vertexes have just a few edges) and the Watts Strogatz model gives high clustering coefficients (friends of my friends are also often friends) but neither gives both.

Here, Holme and Kim start with the Barabasi-Albert model and add a new triad-forming step. This makes sense: if you want the final graph to have more triples, then ensure that more triples are added during graph generation! The exciting thing is that not only do you get the triples (and therefore a tunable clustering coefficient) but you still get the power-law degree distribution.
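In toy form (my own sketch again, reusing the link and random-element helpers from the acquaintance-network sketch above): after a new vertex v attaches to w via preferential attachment, the triad-forming step also links v to a random neighbor of w, closing a triangle.

(defun triad-formation-step (graph v w)
  "After v has preferentially attached to w, link v to a random
neighbor of w as well -- this is what closes the triangles."
  (let ((candidates (remove v (gethash w graph))))
    (when candidates
      (link graph v (random-element candidates)))))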

The analysis in this paper is relatively light-weight but I actually enjoyed that (I'm a computer scientist (heh, heh), not a statistical physicist). It takes a nice idea, elaborates it, shows that it works and wraps it up. Nice.


Steal These Buttons
Tuesday, November 1, 2005

I found Steal These Buttons via Richard Newman's holygoat weblog. It's almost like a new language.


Fire Logic
Tuesday, November 1, 2005

Fire Logic is an odd combination of fantasy, tarot, psychology, military and cultural maneuverings and free love. I found the plot compelling, the writing skillful and the dramatic tensions gripping and exciting. I was left quizzical, however, by the number of sexual liaisons -- I'm probably just an old fuddy-duddy, but many of them seemed to have little to do with moving the plot forward or developing the characters. Oh well, I still enjoyed the book and look forward to the sequel.


Announcing Tinaa (again..., for the first time)
Monday, October 31, 2005

After more than a year collecting dust, Tinaa has finally arrived at common-lisp.net. Tinaa is a simple yet extensible documentation system that relies on Lisp introspection to do all the dirty work. The plus side is that many aspects of Tinaa documentation can never get out of sync with what's real. Darcs repositories for Tinaa will be posted as soon as I can figure out how to get it all to work smoothly.


Meta-point and LaTeX
Monday, October 31, 2005

I really wish I could meta-point when I'm working in LaTeX. Sometimes I just want to know how something was done by looking at the source-code, not at the manuals.


Blue Gene even faster
Monday, October 31, 2005

What Blue Gene is now able to do every second, any person in the world with a handheld calculator would take decades to accomplish.

But why would anyone want to?! Who writes this stuff?

Now, the race is on to reach a petaflop i.e. 1,000 trillion calculations per second, a milestone, which could change the way we look at science, engineering and business, and more importantly, will have IBM and its government partners at loggerheads with Japan, in a bid to reach the target. (emphasis mine)

The business aspects of IBM and Japan are more important than changing the way we look at science, etc. I don't think so. I also don't think that a bald assertion that going a lot faster is going to change the way we look at things is worth much.


Weighted evolving networks: coupling topology and weights dynamics
Monday, October 31, 2005

This letter examines a new model of network growth where both the edge and node weights can vary dynamically as the network evolves. It is an interesting extension of models like that of Barabasi and Albert where the structure changes but the characteristics of the nodes and edges, once added, are fixed. The model uses the vertex's strength (the sum of the edge weights radiating out from a vertex). At each time step, a new vertex is added and attached to m other vertexes where the probability of attachment is proportional to the existing vertex's strength. Then the pre-existing edge weights are altered via a multiplier delta applied to the existing weight proportional to the vertex's strength. If the multiplier is greater than one, then all the edges get stronger; if less than one, then they get weaker. More complex models can be had if the multiplier itself is allowed to vary with time or the local environment.

The authors go on to show that this model leads to the usual power law dynamics with exponents in the usual ranges where the exponent can be derived from the value of the multiplier. It's a nice little piece: a good idea developed quickly with interesting results. Aside from wishing I'd written it, what more could I ask for? <smile>
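The core quantities are easy to write down. Here's a toy sketch of my own (not the authors' code) with edge weights stored in a hash-table keyed by (u . v) conses:

(defun vertex-strength (vertex weights)
  "Strength: the sum of the weights of the edges touching vertex."
  (loop for key being the hash-keys of weights using (hash-value w)
        when (or (eql (car key) vertex) (eql (cdr key) vertex))
        sum w))

(defun pick-by-strength (vertexes weights)
  "Choose an attachment target with probability proportional to strength
(assumes at least one vertex with positive strength)."
  (let* ((strengths (mapcar (lambda (v) (vertex-strength v weights)) vertexes))
         (roll (random (float (reduce #'+ strengths)))))
    (loop for v in vertexes
          for s in strengths
          when (minusp (decf roll s)) return v
          finally (return (first (last vertexes))))))

(defun reinforce-edges (vertex weights delta)
  "Strengthen each edge at vertex by delta * w / s, per the model."
  (let ((s (vertex-strength vertex weights)))
    (loop for key being the hash-keys of weights using (hash-value w)
          when (or (eql (car key) vertex) (eql (cdr key) vertex))
          do (setf (gethash key weights) (+ w (* delta (/ w s)))))))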


Boycott JPEG?! I never knew
Friday, October 28, 2005

The JPEG standard is the subject of a patent dispute by alleged JPEG patent holder Forgent. This is asinine and far worse than Unisys's silly GIF dispute. I don't understand how a joint standard can be patented by anyone. Rather than boycotting JPEG, I think we should boycott Forgent. Grrr. They disgust me.


Raampant Google Speeculation
Thursday, October 27, 2005

What have these people been drinking? Google crushing eBay? Google making SQL irrelevant? A GoogleSQL with embedded ads (what does that mean, even?!). Google Base or Baseless speeculation.


the Extended Mind
Wednesday, October 26, 2005

A review from the grab-bag of the past...

Through analogy and thought examples, The Extended Mind argues that our cognition, our beliefs, and our self all partake of the world to some degree; that none of these things are trapped within our skull and skin. Clark and Chalmers propose an active externalism "based on the active role of the environment in driving cognitive processes". Note that this is not claiming that cognitive processes take place in the world--they obviously take place in the brain--but it is claiming that these processes are strongly coupled to the external world and social milieu (especially through language).

Quotes I like:

"Within the lifetime of an organism, too, individual learning may have molded the brain in ways that actively anticipate the continued presence of those ubiquitous and reliable cognitive extensions that surrounded us as we learned to perform various tasks." (cf. Frank Keil's talk on how little we really know in spite of our optimism.)

"Without language, we might be much more akin to discrete Cartesian "inner" minds, in which high-level cognition, at least, relies largely on internal resources. But the advent of language has allowed us to spread this burden into the world. Language, thus construed, is not a mirror of our inner states but a complement to them." [down with mentalese!].

All in all, this is a very entertaining paper whose thesis seems less shocking every year. In a world of continuous partial attention, GTD, and more devices than at which you can shake ten sticks, the idea that our selves are not all in our heads makes perfect sense.


the Physics of Superheroes
Wednesday, October 26, 2005

James Kakalios is a funny writer and the Physics of Superheroes is a funny book. It also introduces quite a bit of physics in an informal yet rigorous style that surely keeps his brighter students entertained and learning. The premise of the book is to see whether or not what superheroes do makes sense if you grant them one fantastic and impossible power. For example, suppose that Superman were really strong, could he jump over buildings? How strong would he have to be? Suppose the Flash had super speed? Would this really let him run up buildings or zoom across the water? I have to confess that I mostly skimmed the book looking for the funny bits (physics is so 25 years ago for me <smile>) but if I ever want to relearn some of the basics, this may be the place I go.


Finders, keepers? The present and future perfect in support of personal information management
Friday, October 21, 2005

First Monday is a vibrant peer-reviewed electronic journal that focuses on the Internet and the interactions it engenders between humans and technology. This paper by William Jones (co-director of the Keeping Found Things Found project along with Harry Bruce) examines information overload as a signal detection task. For each "piece" of information we come across, we must decide whether to keep it or toss it. Two kinds of mistakes are possible: we can keep things that we shouldn't have or we can toss things we should have kept. Personal Information Management is all about reducing the costs of these mistakes.

The obvious costs of a keeping mistake (i.e., of keeping something you never use) are going down: storage is cheaper every year, computers are faster, etc. But those physical constraints are bounded by our human ones. As Herbert Simon said:

What information consumes is rather obvious: it consumes the attention of its recipients. Hence, a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.

(and remember, this was in 1971. He didn't even have a cell phone or instant messaging!). In particular, the more stuff we keep, the more difficult it may be to find what we really need right now and we may even forget that we ever had it -- out of sight, out of mind.

Alternatively, if the costs of a losing mistake were zero, then we could keep nothing. This only works if we can also find what we want again as easily as we could had we kept it. It also requires that the information "out there" doesn't degrade or disappear. Further, the information out there may be hard to find -- my categories aren't shared by the rest of creation so my search strategies may fail. Lastly, it is impossible to find things I forget to look for! Keeping nothing overlooks the fact that what we keep provides reminders of our tasks and projects.

Given that we can reduce costs of keeping and losing mistakes but not remove them, what strategies can we adopt going forward? More and better tools are part of the answer but also part of the problem: each tool tends to create new information federations all of which must be maintained. Keeping things simple is clearly a good idea as is teaching organizational skills to young, old and in between. The problems here are mirrored in the tensions between folksonomies and ontologies: ontologies (can) provide clean classifications amenable to logic but require up front development and have difficulties adapting to a rapidly changing world. Folksonomies (or even persononomies -- now there's a word!) are inexpensive, bottom-up and flexible but may be harder to share and harder to use for automation and reasoning.

This paper, and the work it references, are fascinating and important. Everyone talks about continuous partial attention and information management, but it's time someone started doing something about it!


Science and Math and Amerika
Friday, October 21, 2005

The U.S. is falling behind... The most damning fact:

In 2001 U.S. industry spent more on tort litigation than on research and development.

That would be funny if it weren't so sad and maddening. Of course, America has a great record of foreseeing disasters before they arrive and preparing for them (cf. August 6th briefing of 2001, Katrina hurricane of 2005, Global Warming, Sputnik, and so on and so on).


Funny stuff from 43folders
Thursday, October 20, 2005

A podcast from Merlin Mann. Funny.


Social Bookmarking Tools (I)
Wednesday, October 19, 2005

This paper from the electronic journal D-Lib provides a nice introduction to many of the tools in the burgeoning field of social bookmarking. It covers both their popular (and here and here) and more scholarly forms. The authors describe the history of linking and bookmarks and the growing architecture of participation (for more on this, see Tim O'Reilly's wonderful essay on Web 2.0) in Amazon, Slashdot, craigslist and so forth. They go on to describe how Connotea combines folksonomy with more traditional ontology-tagging like Dublin-Core. The hope, as always, is that we can find ways to combine the structure and meta-reasoning of ontologies with the vibrancy and liveness of collective tagging without also failing under the weight of spam and exploitation. It's an experiment whose end is still uncertain... The paper closes with a nice review of several tools (many of which I had never heard of).

I didn't find much intellectual high-protein substance in this paper but it's excellent as an overview and links to huge amounts of really cool stuff. Seriously.


Great notes on the software many love to hate
Wednesday, October 19, 2005

Yes, that software is Microsoft Word -- the worst software ever:

Microsoft Word is a beast. Word is an evolved creation, the bastard offspring of marketing, some original thoughts on how to create a word processor, and generations of Ziff-Davis (PC Magazine) induced rapid mutation to fit someone's distorted checklist. It is to software as the Irish Elk was to mammals. It is an inherently incurable mass of contradictory impulses, which are fully evident in Word's formatting model. It is the single most miserable piece of software that I absolutely must use.


Wow "Big Brother" becomes warm and fuzzy
Wednesday, October 19, 2005

From Symantec marketing material via Ars Technica:

We're providing Big Brother in a box, if you like, to just keep a gentle eye on people. And if people deviate from their normal patterns, we can flag that.

George Orwell is probably rolling in his grave. I especially like that "gentle eye" bit.


Favorite paper title of the year
Monday, October 17, 2005

I have no idea whether or not the paper (PDF) is any good, but "The Study of Delusion in Multiagent Systems" is a great title!


More AppleScript woes
Monday, October 17, 2005

I downloaded a whole bunch of PowerPoint presentations from the IASW 2005 conference (International Something Semantic Web) web site. I wanted to convert them to PDF so that I could import them into DEVONthink -- an awesome application! I think, "Hey, I'll use Automator". But... PowerPoint isn't scriptable and Keynote doesn't have any actions for exporting. "OK," I think, "I'll just write an AppleScript." Iterating through the files is no problem; opening them in Keynote is no problem; but saving as PDF? Ah, as Shakespeare said, "there's the rub". Keynote has a save command (as does every application that implements the Standard scripting suite) and the save command has an as argument to specify "the file type in which to save the data". Trying something like

save myDoc as "PDF" in myFile

however, still saved the file in Keynote format...

So there I was, opening and saving the files by hand... sigh. Computer automation has a long way to go. If anyone knows how to do this in AppleScript, please let me know!


Dan Solove plays at security
Saturday, October 15, 2005

Via Bruce Schneier. Very funny.


slow motion wave in OS X
Friday, October 14, 2005

Hey! I just learned that if you hold down the shift key when you invoke them, many of OS X's cool interface effects happen in sloooow mooootion. Very cool. Very fun.


Dashboard lack of dash
Friday, October 14, 2005

I'm getting to like Dashboard but can't understand why I have to wait and watch my widgets refresh for 5 seconds when I haven't used it for a while (only after startup? after sleeping? I don't know). This strikes me as a big usability problem. How much processor power would it take to keep the widgets refreshed in the background?


Joel sets priorities
Friday, October 14, 2005

Joel is a bit prolix this month. The quick summary is:

  • Don't do stuff for just one person
  • Don't do stuff now just because you know you'll have to do it someday
  • Do get a group together to collect ideas, tag them for cost, benefit and difficulty, and prioritize them collectively by voting.

There are some nice ideas on the mechanics of priority setting at the end of the essay; otherwise, there is not as much there there as usual.


Unimaginably Quick Review: Cognatrix
Wednesday, October 12, 2005

Cognatrix looks like a very interesting program for building your own personal thesauri. I'm not sure why I'd want to do that but if I ever do, I'll know where to turn.


Shells
Tuesday, October 11, 2005

Though I've never tried it, I've always loved the idea of using Scheme as my Unix shell. Olin Shivers has a 1994 paper that describes it beautifully. Today I came across a 2001 article from Linux Magazine (of all places). It's not that exciting but does offer some good examples. I downloaded the Scheme Shell source but the configure / make magic didn't work for me under OS X because of lots of incompatible implicit declarations. There is a Fink distribution but I'm not sure I want to go there. If anyone has successfully built the Scheme Shell under OS X, I'd like to hear about it.


the Opal Deception
Friday, October 7, 2005

Eoin Colfer writes exciting books with intricate plots, clever characters and a free wheeling love of futuristic science and mythical creatures. The Artemis Fowl series explores the convoluted doings of elf Holly Short, centaur Foaly, dwarf Mulch Diggums, evil boy genius Artemis Fowl, and his bodyguard Butler. Don't be taken in by the punny names; the books may be written for young adults but I like them too. That said, this fourth book lacks much of the freshness of the previous three. It's hard not to get derivative when you have to repeat yourself and I'm hoping that Colfer moves back to some of his other ideas like the Supernaturalist.


Tinaa is not an acronym
Friday, October 7, 2005

Tinaa is a Common Lisp documentation system I did some work on a year or two ago and then left stranded. I've dusted it off a little bit so that I could use it to document CL-Containers, metatilities and moptilities. Unlike some other documentation systems that trundle over the source, Tinaa relies on Lisp's introspection to build up its picture of a system. One thing it makes evident is that I have a lot of documentation to write! Oh well, at least many of the names are semi-self-explanatory.
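
The introspective approach needs surprisingly little machinery to get started. Here's a minimal sketch (my toy, not Tinaa's actual code) that walks a package's external symbols and collects whatever documentation they have:

(defun sketch-package-docs (package-name)
  "Collect (symbol kind docstring) entries for a package's external symbols."
  (let ((entries nil))
    (do-external-symbols (symbol (find-package package-name))
      (when (fboundp symbol)
        (push (list symbol :function (documentation symbol 'function)) entries))
      (when (find-class symbol nil)
        (push (list symbol :class (documentation (find-class symbol) t)) entries)))
    (nreverse entries)))

;; Running this over one of my packages returns lots of entries, many with
;; NIL docstrings -- the "lots of documentation to write" effect in action.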


Quick Review: CSSEdit
Friday, October 7, 2005

CSSEdit is... wait, you'll never guess. Let's have a drum roll please... Yes, it's a CSS editor for OS X.

I'm very far from a CSS maven and when I must muck with it, I spend about 85 percent of my time looking up options and fiddling with parameters to see what it is they do. CSSEdit looks to speed up the process by providing a complete UI for all of the CSS parameters and a live HTML preview. This means that you don't need to look anything up; you just fiddle and see what happens! If you know what you're doing, however, you can easily switch to the source editor and type away. I was impressed with its general ease of use and speed. The only thing missing is a nice hierarchical view of how the different styles inter-relate (or maybe I just couldn't find it). Xyle Scope has that but the CSS editor in CSSEdit seems more usable. If CSS is part of your life, then this might be a good way to spend $25.


cl-containers moves towards reality
Thursday, October 6, 2005

I've decided to get cl-containers out without worrying about ASDF. This means I should be able to stick what's needed up on the web site by the end of this week (oh, oh, I've almost made a commitment). This includes:

  • cl-containers
  • metatilities (everyone needs their own set of matching utilities)
  • moptilities (everyone needs their own MOP layer too)
  • generic-load-utilities

Most of this will be released under the MIT license although some of the code comes from long ago and far away and has its own (quite unrestrictive) license. Once I've released, lots of good stuff will remain to do (ASDF support, testing, making sure it's portable across implementations, etc). As always, stay tuned.


The fork is irrelevant
Thursday, October 6, 2005

One of my favorite movies is I've Heard the Mermaids Singing:

Gabrielle: It's an external transformation [of subject].

Clive: Internal.

Gabrielle: External, look at the lemon.

Clive: What about the fork?

Gabrielle: Oh, the fork is irrelevant!

Clive: Yes, you're right, the lemon does it.

Ah, modern art.


Lisp meets XP
Tuesday, October 4, 2005

Lisp meets XP in Australia. The results are extremely successful.


Quick Review: Avenir
Tuesday, October 4, 2005

Avenir is a writer's tool for those who want to write without futzing with the formatting, worrying about Word, or tangling with a too basic text editor. Its claim to fame is the addition of top level structure (e.g., you can structure a single document into multiple chapters) and excellent support of notes and annotations. All too often when I write, I find I need to keep a second document open to record notes, jottings, random ideas and to-dos. Avenir solves this problem both by giving you a place to record notes about each part of your document (and each character in your novel) and by supporting annotations on any part of your writing.

Pros

  • Full screen writing mode -- no distractions
  • Annotations, and extra places for notes
  • Document structuring

Cons

  • The UI is a bit "clunky": I lost some work because I couldn't figure out how to get out of full screen mode. When you click add, you still have to click again (or hit enter) before you can type, and the full screen mode is too wide. I shouldn't have to turn my head so much while typing.
  • You can't make annotations of annotations, nor can the same piece of text have multiple annotations
  • The outlining is weak. It seems there is only a single level.

That said, Avenir looks like a step in the right direction and is probably a much better tool for general writing than Word. It's a good deal for $20.


(Year old) Interview with the inventor of darcs
Tuesday, October 4, 2005

David Roundy has done a lot for version control and Haskell with his invention and implementation of darcs. I've been playing around with it recently while trying to get cl-containers off the ground. It's very cool.


Damn, why didn't I do this earlier!
Tuesday, October 4, 2005

The inimitable Bruce Schneier points to this new NSA patent (6,947,978):

Method for geolocating logical network addresses

Abstract: Method for geolocating logical network addresses on electronically switched dynamic communications networks, such as the Internet, using the time latency of communications to and from the logical network address to determine its location.

I was writing code to (try to) do this a month ago using latencies from the Network Time Servers. Of course, I didn't finish the project so I guess they deserve the patent more than I do. It's a good idea but I don't understand patents well enough to understand why it should be patentable.
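
The core trick is just physics: a round trip can't be faster than the signal's propagation speed allows. A back-of-the-envelope sketch (the roughly 2/3 c figure for fiber is my assumption, not anything from the patent):

(defparameter *km-per-second* 200000
  "Rough signal propagation speed in fiber: about 2/3 the speed of light.")

(defun max-distance-km (round-trip-seconds)
  "Upper bound on how far away a host can be, given its round-trip time."
  (* (/ round-trip-seconds 2) *km-per-second*))

;; (max-distance-km 0.030) => 3000.0
;; A 30ms RTT bounds the host to within ~3000 km; with RTTs to several
;; landmarks of known location, the host must lie in the intersection of
;; the corresponding disks.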


Towards the Next Generation of Enterprise Search Technology
Sunday, October 2, 2005

This is the lead-off paper for the IBM journal's special issue on their Unstructured Information Management Architecture (UIMA). As such, it's pretty bland. UIMA hopes to bring Natural Language Processing (NLP) and other smarts to search so that web pages are no longer seen as a big bag of words. IBM is big on this and their Alphaworks site is even releasing Java source code to help. I think that adding semantics to the web is vital and that the tools from artificial intelligence are going to be necessary to make it happen. UIMA is a step in the right direction. If you don't believe me, see what Jon Udell had to say!


User Tailorable systems: Pressing the Issues with Buttons
Sunday, October 2, 2005

Proving that a bad pun doesn't completely ruin the chance for publication is only part of what makes this a fun read. The authors tackle the steep slopes between what they call workers, tinkerers and programmers by introducing some simple customization technologies and working on the culture that will use them. That culture matters is so obvious that almost all technologists ignore it completely -- which explains why so many technologies fail. Most people using computers don't understand (and don't want to understand) them, don't know what is easy and what is hard, and feel lucky to get through their days without losing their work (I have no evidence for this but still think it's true!). Many people that know something about customization do it badly or are stymied by problems that they feel should be simple. Finally, many people that know how to use computers have forgotten what it was like when they didn't and don't know how to explain their knowledge usefully.

The buttons discussed in this paper were UI widgets that could be tailored in both the usual graphical ways (shape, color, position, etc) and in their behavior (by scripts or programs). Because this was done on a Lisp Machine (in part), working buttons could be emailed and embedded in documents. This let users organize their buttons, tailor their buttons and share their work. Furthermore, a new position (the handyman) was created to mediate between the workers and the tinkerers and programmers. Because the buttons were cultural artifacts that could be cloned (the system used prototypes instead of inheritance), experimentation was simple and the buttons were quickly accepted. Because the learning slope was more relaxed and because the buttons were useful, people used them, learned about them and moved further up the slopes. Here is a list that shows the steps along the learning curve:

  • move around the screen
  • receive in e-mail
  • situated creation
  • copying
  • changing appearance
  • editing parameters
  • modifying Lisp
  • using building blocks
  • Lisp programming

It's easy to see why so many people could be brought away from the flat plains and towards the hills!
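
By the way, the prototypes-instead-of-inheritance point is easy to model. A toy sketch (mine, not the paper's): if an object is a property list, a clone is just a child whose overrides shadow its parent's slots:

(defun clone (prototype &rest overrides)
  "Clone a prototype; GETF finds the overrides before the inherited slots."
  (append overrides prototype))

(defun prop (object key)
  "Look up a property, falling back to whatever the prototype provided."
  (getf object key))

;; (defparameter *button* (list :label "Go" :color :gray))
;; (defparameter *red-button* (clone *button* :color :red))
;; (prop *red-button* :color) => :RED
;; (prop *red-button* :label) => "Go"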


Friday Bat Blogging
Friday, September 30, 2005

Oh oh, looks like the evil Spell Binder has changed the "C" to a "B". I hope Letterman can save the day.


Toolglass and Magic Lenses: the See-through Interface
Friday, September 30, 2005

Another old (1993) paper that illustrates the current lack of imagination in computer interfaces. I really like OS X and Spotlight is cool but having tools that really exploit the power of modern computers would be nice. Maybe someone could make a set of magic lenses that use Core Image to do very cool things... The trouble (?) is that the model has to be open so that lenses external to the application can perform their magic.

The paper introduces two-handed manipulation, where the non-dominant hand does coarse positioning and the dominant hand does fine manipulation. It also describes lenses and semi-transparent tools that can do everything from magnifying to altering the visual presentation to making changes in the model. The authors present oodles of interesting tools, most of which were actually implemented.

Perhaps it's no surprise that the implementation language (and platform) was Xerox Lisp.


Simple file parsing in Common Lisp
Friday, September 30, 2005

Here's some code I wrote to read a file of transactions from my bank and convert it into a nice list. The data looks like:

09/29/05
Checking
EFT PAYMENT Comcast FSBlink Chan
$-58.26

09/27/05
Checking
Check 564
564
$-10.00

I.e., each transaction is spread out over four (or five) lines. My code reads the lines, munges transactions together and then post-processes them into an easier to read format.

The Lisp code is readable but a bit verbose... I know that the munging step could be done with a regular expression but, sadly, I'm not expert enough to whip one up that would work (especially one that would deal with either four or five lines for each transaction). I've sketched an attempt after the code below.

Oh, there's also a bunch of personal utility functions in the code (e-mail me if you're that interested!). These include: collect-elements, map-lines-in-file, string-trim, bind, time-month and time-date.


(defun parse-transactions ()
  ;; Pass 1: group non-blank lines into per-transaction buffers.
  ;; Pass 2 (the :transform) destructures each buffer into fields.
  (collect-elements
   (let ((buffer nil)
         (result nil))
     (flet ((parse-buffer ()
              (when buffer
                (push (nreverse buffer) result)
                (setf buffer nil)))
            (add-line (line)
              (push line buffer)))
       (map-lines-in-file 
        (lambda (line) 
          (if (zerop (size (string-trim " " line)))
            (parse-buffer)
            (add-line line)))
        "p2dis:data;transactions")
       (parse-buffer)
       result))
   :transform
   (lambda (transaction)
     (bind (((date nil comment amount-or-check &optional amount) transaction)
            (date-and-time (apply #'encode-universal-time
                                  (multiple-value-list 
                                   (parse-date-and-time-string date))))
            ;; guard :end1 so short comments don't signal a bounds error
            (is-check? (and amount 
                            (string-equal comment "check "
                                          :end1 (min 6 (size comment)))))
            (dollars (parse-integer 
                      (remove #\, (if is-check? amount amount-or-check) 
                              :test #'char-equal)
                      :start 1 :junk-allowed t))
            (kind nil))
       (flet ((fixup (find remove the-kind &optional replace)
                (when (string-equal comment find :end1 (min (size find) (size comment)))
                  (setf kind the-kind)
                  (when remove
                    (setf comment 
                          (subseq comment (min (size remove) (size comment)))))
                  (when replace
                    (setf comment replace))
                  (values t))))
         (or (fixup "EFT" "EFT PAYMENT " "EFT")
             (fixup "POS" "POS WITHDRAWAL (DBT)" "POS")
             (fixup "ATM" nil "ATM" "Withdrawal")
             (fixup "External" "External Withdrawal" "EFT")
             (fixup "Deposit" nil "dpt")
             (fixup "Insufficient" "Insufficient Funds/Ovdft. Fee " "---")
             (fixup "Overdraft Protection" nil "opd")
             (and is-check? (setf kind "CHK")))
         (list kind comment dollars (time-month date-and-time) (time-date date-and-time)))))))
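
For what it's worth, here is the regex approach I was waving at above -- a sketch assuming CL-PPCRE (the pattern and names are mine and untested against real data). The optional fourth group soaks up the check-number line, which is what handles the four-versus-five-line problem:

(defun parse-transactions-with-regex (text)
  "Match one 4- or 5-line transaction per iteration; CHECK is NIL when absent."
  (let ((result nil))
    (cl-ppcre:do-register-groups (date account comment check amount)
        ("(?m)^(\\d\\d/\\d\\d/\\d\\d)\\n(.+)\\n(.+)\\n(?:(\\d+)\\n)?(\\$-?[0-9,.]+)$"
         text)
      (push (list date account comment check amount) result))
    (nreverse result)))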

Memory Management
Wednesday, September 28, 2005

John Siracusa says that the lack of a language / API with good memory management may be Apple's next big nemesis. It's an interesting thought and the comments are good too.


Time as Essence for Photo Browsing Through Personal Digital Libraries
Tuesday, September 27, 2005

Digital cameras lead to lots of pictures, and hierarchical file systems aren't up to the task of managing them, especially if the photographer doesn't have the time to invest in labeling, captioning and organizing. The authors of Time as Essence realized that most pictures are taken in batches - a birthday, a trip, another birthday, a holiday, a party - and that the pictures taken at these events often come in sub-batches - the arrival, the opening of the presents, the eating of the cake. From here, it's a relatively simple matter to cluster the stream of pictures hierarchically and present an interface that uses this structure to help users find the photos they're looking for. This paper lays out the problem, provides two photo browser solutions and then compares these with a commercial offering in a series of experiments. Perhaps not surprisingly, the time-clustering browsers bested the simple linear one for photo finding. More interestingly, the simpler of the two browsers bested its cousin (which revealed more structure in its interface and which I had therefore expected to score more highly). The great thing about this work is the understanding that people don't want to, and often aren't able to, organize the masses of data they swim in. Systems that can find structure automatically are a big win. The other great thing is the reaffirmation that users want simple interfaces (go Apple go).


Girding for the Grid
Tuesday, September 27, 2005

Interesting article by Carole Goble of myGrid UK (isn't that cute, myGrid!? I think they should have called themselves iGrid). She discusses the different uses biologists and physicists may have for Grid Computing. There is a huge need for extremely flexible, loosely coupled and adaptive systems. I think that Richard Gabriel is right when he says that we need to move away from axiomatic systems and towards biologically inspired ones (indeed, that's part of why I love Lisp). My slogan for it is: patterns not protocols. Now all I need is an actual example but that may be a bit of work!


John Gruber on AppleScript and English
Tuesday, September 27, 2005

AppleScript tries to read like English, which sounds good. But English is incurably vague and mixing vagueness with computers leads to suffering (which leads, as we know, inexorably, to the dark side).

This is AppleScript at its worst. It was a grand and noble idea to create an English-like programming language, one that would seem approachable and unintimidating to the common user. But in this regard, AppleScript has proven to be a miserable and utter failure.

This is partly a matter of leaky abstractions and partly one of the wrong tool for the job. It's also a good lesson for some of what we need to think about when creating Domain Specific Languages.


Building Bridges: Customisation and Mutual Intelligibility in Shared Category Management
Thursday, September 22, 2005

The mix of sociology and computer science provides fertile ground for making tools that make work work better instead of making work more work. It also leads to papers with longer sentences and words like "artifacts", "appropriation", and "mutual intelligibility". This paper by Dourish et al. explores how a government agency (the Department) deals with the long term categorization problems involved in tracking projects (like building a bridge) from inception to completion and maintenance. The project documentation must be categorized for many different groups using government mandated categories (which occasionally change and which don't always fit the task at hand). To augment the paper categorization, Dourish et al. provide the metaphor of layered sheets (akin in some ways to magic lenses) which add, remove and modify categories in the system. Each group and individual can create their own set of sheets to structure their work. The sheets don't change the underlying categorizations, so users can understand each other's work by adding and removing them. Thus the architecture supports both customization and intelligibility.

This looks like a nice piece of work but it appears that not much has happened since 1999. We finally have Spotlight and Google Desktop, and Windows Vista will be here someday. But none of these come close to the sorts of things offered in research labs 10 and 20 years ago. Sigh.


Kleinberg wins!
Tuesday, September 20, 2005

Jon Kleinberg wins a MacArthur award! And to think, I mentioned him on this very weblog. I'm clearly much more influential than I realized. Seriously, though, Kleinberg has done beautiful work and it's wonderful to see it recognized. It's great fun to see the rest of the winners too. The world needs people like this more than ever!


Learning Structured Representations
Tuesday, September 20, 2005

Shastri and Wendelken present an ambitious connectionist architecture that can "encode a large body of semantic, episodic and causal knowledge, and rapidly make decisions and perform explanatory and predictive reasoning." My favorite bit is the technique of randomly sampling from the known types and entity spaces while recruiting other unused "nodes" to tie them together. If the connection makes sense (has utility), then it will be strengthened over time; otherwise, it will decay. To me, that sounds about right. The authors are also working very hard to biologically motivate their work.


David Rumsey maps it out
Tuesday, September 20, 2005

From IT Conversations: David Rumsey talks about maps at Where 2.0. It's a fun talk but I wish I could have seen the visuals.


Putting it together: NetNewsWire, del.icio.us, and Cocoalicious
Tuesday, September 20, 2005

Peter Rukavina has a nice screencast combining NetNewsWire, Cocoalicious and del.icio.us.


Chris Anderson of Wired Magazine
Sunday, September 18, 2005

Chris Anderson of Wired Magazine talks about the long tail at ETech 2005. He has some interesting ideas about how the connectivity of the internet alters the shape of popularity. I'm not sure he's right; it seems more like the internet would alter the scaling factors without altering the overall shape... still, it's fun stuff.


Irving Wladawsky-Berger
Sunday, September 18, 2005

Irving Wladawsky-Berger from IBM talks about IT infrastructure, Open Source, autonomous systems and the like at the Open Source Business Conference 2005 (ITC). I found the presentation interesting but a bit glib: being more connected doesn't mean being more understanding, and having everything in XML doesn't mean we're communicating. Also, having more knowledge can become overwhelming and the search for optimization can lead to extreme fragility.


Even their images are derivative
Thursday, September 15, 2005

Separated at birth?


Nice writeup of Cocoa delegation and notification
Thursday, September 15, 2005

This shouldn't be much news to seasoned Lispers but it's a good write-up by Eric Buck:

Sometimes, however, loose integration and a loose coupling are better. Although subclassing is a powerful reuse tool, it is ironic that subclassing can also increase one of the most common obstacles to reuse, namely the unnecessarily tight coupling of code.

The challenge is staying loose while staying correct.


Bill Clementson demos ContextL
Thursday, September 15, 2005

Bill has a nice write up and demo of ContextL.


Semantic File Systems
Wednesday, September 14, 2005

Before there was Spotlight, before there were Placeless Documents, but not before the Canon Cat, there were Semantic File Systems. This paper by Gifford et al. outlines the basic approach: hide the file system from users behind a more or less transparent query interface and build the database for the query using various transducers to interpret files more or less intelligently. It's good stuff. It also shows how far we still have to come before our systems work as well as what was in the lab almost twenty years ago.


Folksonomy: stop worrying and love the mess
Tuesday, September 13, 2005

A nice panel discussion from ETech 2005 between Joshua Schachter (del.icio.us), Stewart Butterfield (Flickr), Jimmy Wales (Wikipedia) and Clay Shirky (educator and technology malcontent (in a good way!)). I appreciated the insightful and, in hindsight, obvious distinction between:

  • your data for you (Flickr)
  • your data for everyone (Wikipedia)
  • other people's data for you (del.icio.us)

It's well worth hearing if you think this stuff is cool.


Emerging Technologies and the military
Tuesday, September 13, 2005

From IT Conversations comes this presentation from ETech 2005. The main topics were a military Flickr-like system for sharing photos taken by soldiers, using small satellites to provide better information at the edge (though this seems more like wishful thinking unless there are good means to prioritize -- everyone wants everything now, now, now), and the use of fast 3D data capture. Cool stuff, though parts were very visual and lost something in the podcast translation <smile>.


You say "peak", I say "pique"
Tuesday, September 13, 2005

This just in from the field (thanks!): Google says that "pique my interest" has 38,500 hits making it the winner (see here for previous contestants). Not surprisingly, the dictionary agrees. I'm happy. I was sure that "peak", "peek" and "peque" (ugh!) were wrong.


Speaking of pequing
Monday, September 12, 2005

Google returns about 14,800 hits for "peak my interest", 1210 for "peek my interest" and 20 for "peque my interest". All of these seem too low for my tastes. Is there another way to spell "peak"?


Very quick review: Formations
Monday, September 12, 2005

Ted Goranson mentioned Formations in a recent About This Particular Outliner column. It's an eclectic organizer / project manager / PIM tool for the Macintosh. Formations has a busy UI that feels cluttered and confusing to me even though it also reveals some interesting features and ideas. I like the Address Book integration and the ability to define multiple views of the same data. Integration with e-mail and dictionary lookup is also a plus. On the whole, however, Formations remains too much of a traditional organizer to really peak my interest.


Minor Tiger quibbles
Monday, September 12, 2005

I've finally upgraded to Tiger on my main work machine. Overall, it's cool. Spotlight is nifty though not magical; Dashboard is cute but not yet a part of my workflow; I use PathFinder (when will they finish 4.0 already!), NetNewsWire and OmniWeb so some of the underlying changes aren't obvious to me. Nonetheless, I'm happy with Tiger. It works, it rocks and it's solid.

On the other hand, Apple's newfound penchant for UI confusion and inconsistencies is getting to be a bit depressing. Take Keychain Access (KCA) for example. I love the keychain. It's a great idea and is well implemented. I like how the new version of KCA sorts the keys into different categories and includes a search (two of my long time secret wishes). What I don't like is how the key to delete a key is DELETE (not Apple-DELETE the way it should be). I also don't like that deleting a key doesn't make the key disappear (at least not when you're looking at a subset of keys). That's wacko behavior and completely bizarre from any perspective. Apple is supposed to get UI. What's happening?


Language Constructs for Context-oriented Programming
Friday, September 9, 2005

Pascal Costanza and Robert Hirschfeld remind us that things in the world usually have multiple roles (I'm a father, a programmer, and occasionally a human being). These roles come and go dynamically, sometimes overlap, and demand different behavior. Programming languages have generally ignored this complexity, leaving the problem to the creative invention of legions of programmers (though Cocoa and Objective-C do provide for the related but simpler behavior of delegation). Since context is so important, it makes sense for languages to support it as a primitive. Costanza and Hirschfeld then go on to demonstrate the power of the CLOS MOP by implementing a strong language candidate before our eyes (admittedly, with most of the details left under the rug, but the source is available!). Context-oriented programming feels like Aspect-oriented programming turned on its side. In AOP, things are expressed "on the side" and then woven throughout the program (e.g., logging or persistence). In COP, things are still expressed "on the side" but now they are treated as multiple layers that can be turned on and off dynamically at run time. This is an excellent paper: clearly written with a strong motivating example and good literature review. It's wonderful to see language design coming back into fashion.
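
To give the flavor, here's a tiny example in the style of the paper's operators (my toy, and hedged: the exact operator names in the released ContextL may differ slightly from the paper's):

(deflayer terse)

(define-layered-function describe-person (person))

(define-layered-method describe-person ((name string))
  (format t "~a: father, programmer, occasional human being~%" name))

(define-layered-method describe-person :in-layer terse ((name string))
  (format t "~a~%" name))

;; (describe-person "Pascal")                               ; the default behavior
;; (with-active-layers (terse) (describe-person "Pascal"))  ; same call, new context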


Good Table creation LaTeX hints
Thursday, September 8, 2005

Can be found on this web page. I've long wondered how to do some of these things!


They do a nice job...
Thursday, September 8, 2005

Man do I ever want one. The ROKR seems a bit brain dead in comparison. Why not an iPod form factor with the click wheel serving as a keypad for numbers? As Andrew observed yesterday, they can obviously handle 4-position sensitivity, why not 10? Frankly, I agree with some of the bystanders that Apple could easily move into the phone design market.


SELF: the power of simplicity
Wednesday, September 7, 2005

This old paper compares and contrasts SELF and Smalltalk. SELF takes the Smalltalk ideas of simplicity and minimalism to even greater extremes. It has no classes (only prototypes), no "free" variables (only objects), and merges state and behavior almost seamlessly.

I've always wanted to play with SELF but still haven't managed to get a working version under OS X (though one exists). It blends a beautiful set of ideas into a really interesting mix.

(Update: I googled "self os x programming" while I was writing this. Now I've downloaded the SELF environment and am fooling with it! Cool.)


Danny Hillis talks about stuff he's done
Wednesday, September 7, 2005

Danny Hillis (of Connection Machine fame and more) talks about what he does for his day job at Applied Minds. Most of the talk is unfocused: a grab bag of cute tricks, robots and interesting falderal. He ends, however, with some ideas about how we might turn the whole internet into a public data store. Worth listening to... quickly.


Using Properties for Uniform Interaction in the Presto Document System
Friday, September 2, 2005

Like Lifestreams, Presto attempted to make computers easier and more powerful. Both projects realized that file systems are an artifact of the computer and are usually more bane than benefit. Where Lifestreams organized everything by time, Presto used a folksonomy-like property system (though properties are typed and can be just about anything). The system did a nice job of incorporating legacy applications via clever manipulation of the file system, allowed services (software agents) to mark up documents automatically, and had some nice user-interface ideas about adding stability to dynamic collections. For some reason, however, Presto is no more. The Placeless Documents project at PARC ended in 1999 and the Harland project follow-on ended sometime later. Sad.


I'd like to say it was fun, but
Thursday, September 1, 2005

AppleScript still leaves me cold! Firstly, there's all this syntax. Syntax is fun when you're a teenager and it gives the illusion of mystic power and magical incantations. After that, it just gets in the way. After using a language in the Lisp family, going back is just painful. Secondly, I don't know the language and getting anything done is always harder than I expect. Thirdly, it's a full featured, powerful language but it's still missing the little things (like a minimum function, for goodness sakes). Enough carping, however. I wrassled with my Script Editor (still oddly broken, by the way) long enough to be able to export my Address Book into a Lispy format that I can now play with. Trivial? Yes. But there it is. (Here's the source in the low probability case that someone else might want to do this.)


Lifestreams: a storage model for personal data
Tuesday, August 30, 2005

Lifestreams is a storage model that hides the file system and indexes everything according to time. Its chief motivations are

  • that storage should be transparent (we shouldn't have to come up with file names),
  • that directories (and hierarchy) are lousy structuring mechanisms,
  • that archiving should be automatic,
  • that smart summaries should be possible,
  • that reminding should be convenient, and
  • that personal data should be accessible anywhere and everywhere without compatibility headaches.

Lifestreams is an answer to these observations. It is a "time ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream." Lifestreams are organized on the fly via find operations (think Apple's Spotlight). Archiving is automatic because streams are naturally organized into past, present and future. To see what you've done, you dial the viewport backwards; to set reminders, you dial it forwards. (Freeman and Gelernter claim this is intuitive but I'd like to see the user tests. To me it sounds like a cute idea that only abstraction loving computer scientists would love...)

The paper includes several examples such as contact management, e-mail and bookmark sharing (Lifestreams are a fairly natural way to implement something like del.icio.us). There are a lot of good ideas here but it doesn't seem as if enough attention was paid to how people actually use their computers to do their work. Indexing by time is helpful, yes, but we are also very spatial creatures and need to be able to structure our work in a multitude of ways.
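
The model is simple enough to fit in a few lines. A toy sketch (mine, not Freeman and Gelernter's): the stream is a time-ordered collection, substreams are find operations, and the viewport is just a time filter:

(defstruct doc time content)

(defun substream (stream predicate)
  "Substreams are created by find operations over the master stream."
  (remove-if-not predicate stream))

(defun viewport (stream now &key (direction :past))
  "Dial backwards to see what you've done; forwards to see your reminders."
  (substream stream
             (lambda (doc)
               (ecase direction
                 (:past   (<= (doc-time doc) now))
                 (:future (> (doc-time doc) now))))))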


Quick Robin, get the bat-bot
Tuesday, August 30, 2005

From Wired (via American Scientist) we present the bat-bot: the first robot that uses in-air sonar effectively for sensing and navigation. It's modeled closely after real bats but we humans have a long way to go:

For all its sophistication, the Bat-Bot still can't hold a candle to its biological progenitor. ... [relies on a] series of powerful computers that crunch through acoustical data from about 750 frequency channels in each ear.

... It turns out a bat's hearing is as complex as it is acute, with hundreds of thousands of frequency channels in each ear, and as many neural receptors, totaling "perhaps a million separate elements,"

"The real challenge is to find a way to duplicate the tremendous parallel processing power of a bat's brain," Simmons said.

A brain that's the size of a pea, he adds.

Now that's high density computation!


the Peace War
Monday, August 29, 2005

Imagine you're an upper level anti-war bureaucrat. Imagine one of your scientists could encapsulate space and time in impermeable bubbles. Imagine you decide to end war and create peace by enclosing all the opposition in your bubbles and becoming the de-facto ruler of the world... If you managed to imagine all that, you have the setting for Vernor Vinge's the Peace War. Like the rest of Vinge's work (here for example), this has excellent plotting and imaginative science. It's a treat worth reading more than once!


Marooned in Real Time
Monday, August 29, 2005

Vinge is an engaging writer with great plotting, interesting characters and wonderful ideas. Marooned in Real Time explores a murder mystery (with one of the most ingenious weapons ever devised!), the nature of technological change, and the implications of controllable time stoppage -- imagine what you could see skipping through the millennia... Marooned in Real Time is a treat.


Nanotubular
Tuesday, August 23, 2005

You can never be too thin or too strong:

The nanotube sheets are about 2 inches wide and just 50 nanometers thick, or about 2,000 times thinner than the width of a human hair. At this thickness, 250 acres of a solar sail made of nanosheet material would weigh less than 70 pounds.


Freakonomics
Sunday, August 21, 2005

Perhaps this book received too much positive press for me to find it tremendously compelling. It's a good book; it's an interesting book; it's a fun book. Contra Malcolm Gladwell, however, I was not "dazzled." Besides, I think the title is too damn cutesy.

Freakonomics is well written, eclectic and vibrant. The two Steves are one economist (Levitt) and one writer (Dubner). It's a good combination. Levitt is a wunderkind who has applied economical thinking (that's the style of thinking found in economics, not thinking that makes good use of its resources...) to all sorts of non-economical problems. This lets him provide unexpected answers from here, there and everywhere. How do school teachers cheat? What caused crime to drop in the 1990s? How are McDonald's and crack gangs the same? How important is parenting style to school grades? (The more important questions of how important parenting style is to life outcomes, or school grades to the kind of person you become, are not, unfortunately, answered...)

The results may not be dazzling, but they are worth reading.


Three bugs in 512-bytes
Friday, August 12, 2005

Who says you can't go wrong in the small! (via xbox-linux via Bruce Schneier)

512 bytes is a very small amount of code (it fits on a single sheet of paper!), compared to the megabytes of code contained in software like Windows, Internet Explorer or Internet Information Server. Three bugs within these 512 bytes compromised the security completely - a bunch of hackers found them within days after first looking at the code. Why hasn't Microsoft Corp. been able to do the same? Why?


The Pleasure of Finding Things Out
Tuesday, August 9, 2005

This book is a wonderful introduction to Feynman's non-technical work. It includes interviews, speeches, and reminiscences from throughout his long and productive career and it shows him clearly as both jester and thinker. I found his discussions of what made science "science" particularly relevant in this time of incipient repression that would have made Galileo fear for his skin. The great benefit of science, says Feynman, is that it shows that it is possible to live life while doubting. We don't need to have the answers to everything. It's OK to keep thinking, and trying, and having new ideas. Scientific thinking, in other words, is an anodyne to fear.


Fully Distributed Representations
Monday, August 8, 2005

Pentti Kanerva made a name for himself way back in 1988 with a little book called Sparse Distributed Memory. In it, he outlined a computational model of memory that made sense from both computational and neurological perspectives. In this 1997 paper, he builds on the work of Plate, Hinton, Pollack and others to describe a simple distributed representation for stuff that the rest of us would store in fixed size records with fields. The representation is slightly reminiscent of Bloom filters: represent each thing (field name or value) as a very long random bit string (you can use vectors of reals or complex numbers too); bind (name, value) pairs together with pair-wise boolean exclusive-OR; then chunk sets of these bound pairs together according to majority rule (i.e., each bit in the result vector is set according to the value of the bits that appear most often, with ties broken at random).

The amazing thing is that even with all this randomness and all these bitwise operations, the resultant chunked vectors retain a similarity to the pieces from which they were derived, and you can pull out pairs and values from the vector in a variety of ways that support both regular lookup and more analogical search. In contrast to 'normal' representations where a single flipped bit brings all to ruin, these holistic (holographic) representations handle noise and combine "structure and semantics" so that similarity actually reflects meaning. Since you cannot continue to chunk values together without a disastrous loss of information, the encoding also might explain George Miller's magical 7 plus or minus 2.
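
The operations are simple enough to try at home. Here's a minimal sketch of the binary version (my code, not Kanerva's; real experiments use vectors of ten thousand or so bits):

(defun random-hv (&optional (n 10000))
  "A random dense binary hypervector."
  (let ((v (make-array n :element-type 'bit)))
    (dotimes (i n v)
      (setf (sbit v i) (random 2)))))

(defun bind (a b)
  "Bind a (name, value) pair with pair-wise exclusive-OR; XOR is its own inverse."
  (bit-xor a b))

(defun bundle (&rest vs)
  "Chunk vectors together by majority rule, breaking ties at random."
  (let* ((n (length (first vs)))
         (result (make-array n :element-type 'bit)))
    (dotimes (i n result)
      (let ((ones (count 1 vs :key (lambda (v) (sbit v i)))))
        (setf (sbit result i)
              (cond ((> (* 2 ones) (length vs)) 1)
                    ((< (* 2 ones) (length vs)) 0)
                    (t (random 2))))))))

(defun similarity (a b)
  "1.0 for identical vectors; about 0.5 for unrelated random ones."
  (- 1.0 (/ (count 1 (bit-xor a b)) (length a))))

;; Binding a chunked record with a field name pulls out a noisy copy of its value:
;; (let* ((name (random-hv)) (age (random-hv))
;;        (alice (random-hv)) (forty-two (random-hv))
;;        (record (bundle (bind name alice) (bind age forty-two))))
;;   (similarity (bind record age) forty-two))  ; noticeably above 0.5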

In sum, this is a wonderful, eye opening paper that combines math, mind and amazement.


I've been walking more so
Monday, August 8, 2005

I've had more time to listen to IT Conversations.

  • Scott Cook of Intuit talks with Larry Magid about Quicken, TurboTax, QuickBooks and how companies can be a source of good. It's refreshing to hear someone in business say that companies exist to serve society (and that if they fail to, they should be limited!). Cook also makes a very strong case for ease of use as being the deciding factor in software adoption and the reason for Intuit's success.
  • In a disjointed co-talk, Jim Buckmaster and Craig Newmark of Craig's list talk about the technology and the vision behind their company. Interesting but a bit disappointing. The best bit is Craig's summary of how the list manages itself and maintains trust in spite of openness.
  • Moira Gunn talks with Tim Cook of Isis research about the conflicting goals of university research and commercialization. As Cook says: "The main functions of universities were teaching and research and I see technology transfer as an important byproduct, but as a byproduct nonetheless. Because if technology transfer drives the research agenda then the university turns into a commercial contract research company, and so who fulfills the role of the university in our society?"
  • Finally, Tom Igoe from NYU talks about networking objects in the small pieces, loosely joined sense. He quickly describes a few dozen of student works in the design space of interconnected objects.

Have fun listening.


Fear and Other Uninvited Guests
Saturday, August 6, 2005

Fear is something we'd rather pretend we didn't have. Fear is something we relegate to the back of the mind, the back of the burner, the back of the closet. But fear is a companion we walk with whether or not we wish.

Harriet Lerner's book provides a taxonomy of fears and gently examines how we can sit with them rather than running. Just as there are no easy answers to life, there are no easy answers to fear. Instead, fear is something that we must learn to accept because failing to accept doesn't avoid it or save us. It just leaves us defenseless and alone.

The goal, I think, is to act correctly regardless, to understand fear not as a failing but as a warning to pay more attention to what life is trying to say.


Google alerts
Friday, August 5, 2005

If you're willing to let Google save your searches by signing up, they have an alert service that will run a search for you daily and send you an e-mail when there is something new (there is also this similar service). I put in "Lisp help" for fun and have so far found the following:

  • From OSCon: Damian twisted minds and code in 5 dead languages (Lisp, PostScript, C++, SPECS, and Latin), which somehow involved dozens of scary pictures of Russian Lara Croft imitators.
  • From the Wichita Eagle, a review of Play it Again Sam: Allan's married friends, Linda and Dick Christie, step in to help him re-enter the dating scene ... Tough guy Paul Ramondetta ably plays Bogey. He skillfully assumes his look and his characteristic slight lisp.

One for two isn't bad! <smile>


Syntax analysis in the Climacs Text Editor
Friday, August 5, 2005

I've been mulling morosely over the state of Lisp IDEs and development tools for, well, about as long as I've been working in Lisp. As such, it was great fun to read Christophe Rhodes, Robert Strandh and Brian Mastenbrook's paper on the Climacs Text Editor. The paper is a high level description of some of Climacs' innards and leaves something to the imagination (but that's why there is source code, right?). Nonetheless, it provides a good picture of some of the technicalities that must be managed in modern graphical text editors. It's funny that something so simple is so hard. I suppose that's analogous to how hard it is for computers to do things like vision and human-like memory. Now, however, I'm rambling so that's a wrap.


Outsourcing data storage
Thursday, August 4, 2005

I've worked on a lot of studies that required sending data (either 1 to N, N to 1 or N to M). Something like StrongSpace might be a good way to support studies that need secure but accessible data... Outsourced storage seems like it could beat the hassle of setting up and supporting this yourself (depending on your geek factor of course!).


Still more IT Conversations
Thursday, August 4, 2005

I've been catching up on old Pod Casts (I've been casting up on my old Pod Catches?).

  • Scott Mace talks about Eclipse with the Eclipse Foundation's executive director Mike Milinkovich. It's big, it's active, it's cool. I wish Common Lisp had a tenth of this energy. I know that Climacs is doing some neat things but it seems a shame that all this energy is being spent on Java. Sigh. Think of the humanity.
  • Moira Gunn talks with Patrick Lincoln about bioinformatics: moderately informative and interesting.
  • Larry Magid talks about Internet Child Safety. He paints the positive picture that this will improve critical thinking skills. Since this didn't happen with TV, or radio, or whatever, I don't see how it's supposed to happen now.
  • George Dyson gives a funny and fascinating talk at ETech 2005 about the birth of the modern digital computer during World War II and at the Institute for Advanced Study.
  • Scott Mace talks with Peter Yared about web services and stuff. This sounded like a lot of oversimplification and marketing to me.

IT Conversations is a damn great service!


More IT Conversations
Wednesday, August 3, 2005


Quick review: Curio
Tuesday, August 2, 2005

I recently took another quick look at Curio. It's a very cool application that just doesn't quite fit my style, or into which I can't quite fit my head, or something. Curio's target market is graphic designers although they are trying to branch out. Their metaphor is great big sheets of paper that you stick anything on and then add links and annotations and whatever! It's a bit like OpenDoc (but not really). I think it could be used for mind mapping and design of all sorts of things. If you're a visual kind of person, I'd suggest taking it for a spin.


Apple's new mouse
Tuesday, August 2, 2005

Apple's new MightyMouse looks pretty cool. My only concern is "the two buttons hidden behind an apparent single button" feature. My guess is that it remains quickly learnable but it might confuse first time users for a minute or two.

Update: The Ars Technica review makes it clear that Apple defaults to having both sides of the mouse act like left clicks. That's smart and that's good.


Shadow of the Giant
Tuesday, August 2, 2005

Just finished Shadow of the Giant, the third book in Orson Scott Card's "Bean" saga. I missed the second one -- whoops! -- so I had to mentally fill in a few gaps... It's a decent book. Card is a decent writer. The ideas and plot in this one were very good and the characters interesting and mostly believable. On the other hand, Card always seems a little preachy to me and the moral dilemmas many people rave about generally strike me as, well, silly. I recommend it to fans but not to people who would like to become fans!


IT Conversations
Monday, August 1, 2005

  • Frans Johansson talks about the Medici Effect: creativity, failure, rewards, etc. Interesting though the interview leaves me wondering exactly how deep this really is...
  • In his keynote at OSB Jonathan Schwartz talks about Open Source, Sun Microsystems, growing markets, corporate blogging and more. Pretty good stuff.
  • Philosopher Alva Noe talks about his book Action in Perception. I've been a long time fan of autopoiesis, Francisco Varela and the like so this sounds like great stuff to me and about time!

del.icio.us recommended tags
Saturday, July 30, 2005

When you edit one of your bookmarks' tagsets in del.icio.us, it shows you recommended tags and popular tags for the page. I haven't seen it documented, but it looks as if the popular tags are based on what other taggers have used and the recommended tags are based on the intersection of other taggers' tags and your own. What I'd like to see is a way to use the content of the page as part of the input to the tagging recommendations. What I'd really like is to do such a thing myself! No time, dammit, no time.
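
If that guess about the mechanics is right, the recommendation step itself is a one-liner. A sketch (my guess at the logic, not del.icio.us's actual code):

(defun recommended-tags (other-taggers-tags my-tags)
  "Recommended tags: what others used for this page that I already use elsewhere."
  (intersection other-taggers-tags my-tags :test #'string-equal))

;; (recommended-tags '("lisp" "mac" "programming") '("lisp" "mac" "politics"))
;; => ("mac" "lisp")  ; INTERSECTION doesn't promise an order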


NSF claims computer scientists are kids with bicycles
Saturday, July 30, 2005

Computer scientists are no different [than] Kids rac[ing] their bicycles, pedaling madly to move ever faster. Then they advance to sedans, but covet sports cars, still wanting to push that envelope of speed.

OK, so I took a few liberties with the context! I think it makes a better headline.

The rest of the story is worth reading from a historical perspective. I'm not sure why the NSF is talking about it today when it happened in 1996.


Leave me alone will you
Thursday, July 28, 2005

From CNet news:

The typical office worker is interrupted every three minutes by a phone call, e-mail, instant message or other distraction. The problem is that it takes about eight uninterrupted minutes for our brains to get into a really creative state.

I'm totally hip to this message but still find it hard to disconnect. There's always one more message to read or write, one more weblog or news article to read, etc. This is just another example of how technology keeps creating new problems as it solves old ones.

Even Bill Gates knows the value of disconnecting:

Bill Gates takes the time, twice a year, to read and ponder the future of Microsoft. How often do you take any time at all to read new ideas, consider your current work and life, and make changes? Not often enough, I'll bet.

This reminds me of a great book I thought that I had already written about but haven't. The book is the Tyranny of the Moment by Thomas Eriksen and it is wonderful.


Smart Radios
Wednesday, July 27, 2005

Wireless communication works because the makers of the sender and the receiver agreed to use a particular standard. There are already dozens of standards and more keep coming; furthermore, standards keep evolving and it takes time to implement them in hardware. This reduces flexibility and time to market. Suppose that the standards could be implemented in software instead of hardware? Then your radios could handle multiple standards and be upgraded easily in the field. Then groups from different communities (e.g., fire fighters and police, or army and navy) could merge their radios simply and not be frustrated by communication disasters while coping with a disaster. Put it all together and you have Smart Radio. The Economist had a nice overview article but now it's only available for a fee. If you want to really go out on a limb, you can instead talk about 'Cognitive Radio':

... the ultimate smart radio would be aware of its surroundings, be able to adapt itself in response and learn from experience...

Sounds almost scary! One thing I haven't seen in this coverage is how secure such things would be. I'd hate to have my radio break or only allow me to listen to certain stations (with lots of ads) because of a virus infection.


Pet peeve of the day
Tuesday, July 26, 2005

People who send mail to broadcast by mistake and then send mail again apologizing... <sigh>


Steps towards world domination
Tuesday, July 26, 2005

It plays chess

The Macintosh version of Shredder performed very well and as far as I know this was the first time that a chess program running on an Apple Macintosh computer has won a major computer chess tournament. The Macintosh hardware has also proved that it is very competitive and fast.

It drives cars

"Dora", is the world's first fully autonomous vehicle driven by Mac OS X. The entire development and race management efforts at Team Banzai is being done using Apple Mac OS X technology.

What's next?


Bruce Schneier rips into Secure Flight
Tuesday, July 26, 2005

Security man Bruce "Beyond Fear" Schneier rips into the problems the government's Secure Flight program is having obeying the law.

Secure Flight is a disaster in every way. The TSA has been operating with complete disregard for the law or Congress. It has lied to pretty much everyone.

Given this administration, why am I not surprised? I guess this opens the question: even if someone is watching the watchers, will they have any power to make changes?


This pattern should be avoidable
Monday, July 25, 2005

Newspaper (webpaper?) articles like this one about the apparent deficiencies in the regulation of hospitals leave me wondering: why? Why, in all the years that this has been happening, haven't we managed to figure out this who-will-watch-the-watchers problem? Why can't we do better? The pattern is clear:

  • A group gets called on to manage, oversee, check up on, etc some other group.
  • Things start out reasonably well
  • Over time, the links between the watchers and the watched become stronger than the links between the watchers and the reason they exist; furthermore, financial incentives pull towards abuse
  • Things go the way of all flesh
  • Articles like the one in the Washington Post appear...

I don't have a solution but I feel in my gut that there ought to be one and that decentralized technologies and machine learning play a role.


Great interview with Wil Shipley
Thursday, July 21, 2005

at DrunkenBlog. Delicious Library, the OmniGroup, philosophy, Shakespeare and more!


Useful writer's tips
Wednesday, July 20, 2005

By C.J. Cherryh:

Writerisms: overused and misused language. In more direct words: find 'em, root 'em out, and look at your prose without the underbrush.


Plotting and Lisp: clnuplot
Wednesday, July 20, 2005

I just ran across Ryan Adams's work on Plotting in Lisp (parts one, two and three) on Lemonodor. Coincidentally enough, it turns out that I started working on something similar a few months ago. I called it clnuplot. Which has, I think, a nice ring to it.

From Ryan’s posts, I think we’re approaching the problem from a similar angle. Here are a few notes from two months ago:

I’ve been slowly learning bits and pieces of GNUplot and writing Lisp code to generate data and command files for it. Today (17 May 2005) I spent a bit of time consolidating what I have into something that might be slightly more generally useful.

You can use GNUPlot by writing data files and running plots from the GNUPlot command line, by writing data files and command files and running those, or by writing command files that have the data inline. What I’ve done is write some classes and functions that let you manipulate plots in Lisp and then write out a command file that can be executed in GNUPlot.

The basic model is one of plots and data-sets. A plot contains information for the entire information display; for example, the title, the axis labels and so forth. Each data set contains information about how to display a single group of data in some format; e.g., the data, the display style, the name of the data in the legend and so forth. A plot contains one or more data sets.

Interface

  • make-plot &rest args &key name comment filename plot

    If you do not supply a plot argument, this creates a new plot object that contains a single data set. If you do supply the plot argument, the data set and its information will be added to it.

    For example, I have a command that first calls make-plot with no data:

    (make-plot nil nil 
            :title "Error rate versus F-measure"
            :xlabel "Percent Mixing"
            :ylabel "F-Measure")
    

    and then later uses the returned plot to build up a number of data-sets:

    (make-plot :points data
            :legend (format nil "Negative ~,2F; Positive ~,2F" 
                                (getf key :fnr) (getf key :fpr))
            :x-coord (lambda (x) (getf x :pm))
            :y-coord (lambda (x) (getf x :f))
            :plot plot)
    

    The final plot object returned contains a whole bunch of data sets. Make-plot currently supports :line, :points and :bar styles. Plots can have titles, a label on the x-axis and the y-axis, and custom labels for the legend. Much of the rest of the functionality of GNUplot is missing but the framework is in place to add it pretty easily (I think I’ll be adding stuff as I need it). Look in the parameters *plot-plot-settings* and *plot-data-set-settings* to get a sense of what settings the plot code knows about.

  • write-plot plot destination

    This command writes the plot object to its file. Each plot object specifies a host, fullpath (directory) and filename. The host and directory default to *plot-default-host* and *plot-default-directory*. The filename will default to “plot”.

    When you call write-plot, it will return the pathname to which the file is written. The file can be executed in GNUPlot (either by piping it from the command line (nota bene, I haven’t tried this yet myself) or by using the load command in GNUPlot). The plot commands and data will all be included in this single file.
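Putting the pieces together, a complete session would look something like this. This is a sketch with made-up data; in particular, the destination argument to write-plot is my guess, since a plot mostly carries its own pathname information:

(let ((plot (make-plot nil nil
                       :title "Error rate versus F-measure"
                       :xlabel "Percent Mixing"
                       :ylabel "F-Measure")))
  ;; add a single data set of points to the existing plot
  (make-plot :points '((:pm 0.1 :f 0.62) (:pm 0.25 :f 0.55))
             :legend "Negative 0.05; Positive 0.10"
             :x-coord (lambda (x) (getf x :pm))
             :y-coord (lambda (x) (getf x :f))
             :plot plot)
  ;; returns the pathname; load the resulting file from GNUPlot
  (write-plot plot "plot.gnuplot"))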

I do most of this in Macintosh Common Lisp and use Alexander Repenning’s AppleScript support under OS X to call out to the shell — it seems creaky but works surprisingly well! Once I got the basics in place, I’ve not had much time to add the obvious features (or even to publicize my work until today’s goad came along (thanks John)). In any case, I couldn’t find Ryan’s e-mail on his blog so if anyone can put us in touch, it would be great to move forward on this together.


Across the Nightingale Floor
Sunday, July 17, 2005

I came across this on a list of summer reading books for young teens and got it for my son. He devoured it so I thought I'd take a look. This is a wonderful book (maybe I'm still a young teen at heart!): mystery, secrets, love, a bit of magic, danger and conflicting loyalties. It's not a deep book the way some of Le Guin's best work can be, but what it does it does very, very well. Great fun!


Communication Boundaries in Networks
Thursday, July 14, 2005

Most of the systems we view as networks exist in part to communicate something from one vertex to another (the internet, World Wide Web, food networks, cell metabolism, phone networks, and so on). How well do they succeed in doing so? How easy, in other words, is it to send a message from one vertex to another, and what factors influence the ease and speed of transmission? This paper quantifies these questions by defining the search information in going from vertex s to vertex t as the number of bits needed to describe the path. This is the sum of the log (base 2) of the degree of each vertex along the path (actually, you subtract one from the degree of each vertex except for the first, because you know that you're not going to backtrack). (If there are multiple shortest paths, one takes the sum over them before applying the log.)

They then measure this for some real graphs and for the corresponding random graphs (which they define as ones that have the same degree distribution and remain connected). This value, delta S = S-graph - S-random, measures how much more (or less) information we need to describe paths in our graph due to its topology. If delta S is positive, then we have longer descriptions; if negative, shorter ones. Interestingly, many real world networks optimize communication for paths of length around 2 or 3. They then go on to investigate modular, hierarchical and scale-free graphs as compared to random ones and find, for example, that hierarchies are not (necessarily) optimal for search.
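To make the per-path bookkeeping concrete, here is the computation as I read it (my own sketch in Lisp, not the authors' code):

(defun path-search-information (degrees)
  "Bits needed to describe one path, given the list of vertex
degrees along it: log2 of each degree, subtracting one from every
degree except the first vertex's (no backtracking)."
  (loop for d in degrees
        for firstp = t then nil
        sum (log (if firstp d (1- d)) 2)))

;; e.g., (path-search-information '(4 3 3 2)) => 4.0 bits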

[The fact] that (club) hierarchies are used in many human organizations may thus be seen as a way to regulate and thus limit the information exchange, rather than optimize it.

This points a way towards measuring information flow in real organizations and perhaps finding structures that enhance the actual goals of an organization rather than hinder them.

The paper continues with a discussion of how global information can aid search and how to stratify information so that the best properties of local and global search are achieved simultaneously. This "scale invariant" strategy selects directions "according to the average traffic to nodes at distances similar to that of the searched target node."

I found this paper very interesting. Its writing is clear, the content useful and the results non-trivial. The Nordic Institute for Theoretical Physics is doing some neat stuff.


Why no NaturallySpeaking for OS X?
Thursday, July 14, 2005

OS X has some built-in speech recognition but the only third-party application is IBM's ViaVoice and the OS X version is well behind the Windows version. From what I've heard, NaturallySpeaking for Windows is a killer application.


Understanding Terror Networks
Wednesday, July 13, 2005

Sageman's book is a compendium of militant Islamic terrorist network history. Though detailed and informative, there is much more minutiae here than there is meat. Indeed, I found it almost impossible to do more than very lightly skim the first four chapters (on the origins and evolution of the Jihad, the Mujahedin and on joining the Jihad).

The fifth chapter on social networks was more interesting but even it was weighed down by detail and seemed sketchy in its grasp of Social Network Analysis (SNA). For example, Sageman cites Barabasi's book Linked for the claim that small-world networks are resistant to random assaults but vulnerable to targeted attacks at their hubs. This is true of scale-free networks (which are small-world) but not necessarily true of other small-world networks such as those of Watts and Strogatz. On the other hand, his analysis of the utility and function of embeddedness, cliques and weak links (in Granovetter's sense) seems spot on.

In summary, Understanding Terror Networks may be an excellent work for those interested in the history and motivations of Islamic (and analogous) terror groups. It is not, however, particularly useful from an SNA or computer science perspective.


Development of sampling plans by using sequential selection
Monday, July 11, 2005

When doing any statistical study, we start with a population (sample space), take measurements and then go from there. For example, suppose we want to learn something about the population of movie fans that have seen the Fantastic Four. For the sake of this example, assume that we further want to take a stratified sample based on the row in which the fans were sitting (maybe people closer to the screen enjoyed the movie more). The obvious way to do this is to wait until everyone is seated and then look in the rows and make selections. This, however, is a lot of work and means that you have to interrupt the picture. This ancient (1962!) paper by Fan et al. demonstrates numerous ways to sample sequentially (item by item) without waiting for the entire group of people to arrive. Some methods are clever statistical tricks made more practical by the advent of digital computers. Others are really quite marvelous. My favorite, for example, is to turn the question of which members of the population we want into how many members we should skip between samples. This lets us select at random an expected number of items without replacement from the population even when we don't know up front how big the population is! I won't go into the math but I do think it's a very nice insight. I love how problems can become solvable when viewed in the correct light.
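Here's a sketch of my favorite trick (my own gloss, not Fan et al.'s exact method): to keep each arriving item independently with probability p, draw the gap to the next kept item from a geometric distribution instead of flipping a coin per item.

(defun next-skip (p)
  "How many items to pass over before taking the next sample,
when each item should be kept with probability P."
  ;; (random 1.0d0) lies in [0,1); nudge it away from zero so the
  ;; log is always defined
  (values (floor (log (max (random 1.0d0) double-float-epsilon))
                 (log (- 1.0d0 p)))))

Walking the theatre row by row, you pass over next-skip people, sample the next one, redraw, and repeat -- no head count needed up front.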


Properties of highly clustered networks
Monday, July 11, 2005

In this mathy paper, Newman presents a network model with tunable degree distributions and clustering coefficients. He analyzes the model to derive closed form solutions for the mean degree, percolation threshold (i.e., loosely and analogically speaking, this is the minimum infectiousness a disease must have to become an epidemic on the network), and the size of the giant component. Newman goes on to analyze epidemics in more detail using networks with both Poisson and power-law degree distributions. He finds that increased clustering decreases the total size of an epidemic but also decreases the epidemic threshold. In particular, no amount of clustering will produce a non-zero epidemic threshold in power-law degree distribution networks.

Newman's model is simple: start with a bipartite graph of groups and individuals, project it down onto the individuals only, and connect individuals who share a group with probability p. Newman's insight (at least, I think it's his insight but I'm not a condensed matter physicist (even though I am made of condensed matter)) is that we can view this process as bond percolation. The rest, as they say, is math.
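A toy version of the construction fits in a few lines (my own sketch, not Newman's code):

(defun project-groups (groups p)
  "GROUPS is a list of lists of individuals. Link each pair that
shares a group with probability P and return the edge list
(duplicate pairs from shared groups are left as-is)."
  (let ((edges '()))
    (dolist (group groups edges)
      (loop for (a . rest) on group
            do (dolist (b rest)
                 (when (< (random 1.0) p)
                   (push (cons a b) edges)))))))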

This is an interesting paper though challenging. I'm still digesting it but if nothing else, it's another view of network models and their connections to physical processes.


Describing my interests in < 250 characters
Monday, July 11, 2005

The Accelerating Times newsletter application requests that you describe yourself, your passions, etc. as part of the sign up process. I didn't give it a lot of thought but I like what I came up with:

Combining computers and appropriate technologies for the rest of us. Specifically, applying AI, HCI and design to technology. Seeking wisdom.


"We're all in this together" - Harry Tuttle
Sunday, July 10, 2005

I just re-watched the movie Brazil for the first time in about ten years. It is an incredible movie and it holds up so well that you would never guess that it's 20 years old!

I remember when I first saw it in 1985 at a little townie theatre in Springfield, MA. The ticket seller warned me about the movie and tried to talk me out of going! Apparently other unhappy Springfieldites had complained! I was not deterred, however, and loved the movie then and love it now. The absurdity, the banality, the mix of past and future, of the normal, the bizarre and the bizarre parading as normal are so fine, so tuned.

If you haven't seen it, I'd recommend getting out to see it now, today! If you have but not recently, then I'd allow you to wait until tomorrow <smile>. It is pure genius.


ITConversations has John Smart
Sunday, July 10, 2005

ITConversations has John Smart from Accelerating Change 2004: Inspiring, amazing, tons of fun. Probably worth a bit more skepticism though!

Earth is pregnant with possibility.

...

The dominant word is surprise!

...

You're never going to be as good looking as you are today and things will never be as slow and simple as they are today.

Recommended.


Patch for QuickSilver's pause script for iTunes
Saturday, July 9, 2005

Though I used to use (and love) LaunchBar, I'm now a bigger fan of QuickSilver. I've modified the pause script so that it's now a toggle rather than only a pause (a behavior that I think is more useful and more sensible...). Being a bear of occasionally very little brain, I haven't figured out how to get this back to Blacktree yet so here it is on my blog (and here's a link for download)!

tell application "System Events" to 
     if (application processes whose name is "iTunes") is not {} then 
	tell application "iTunes"
		if player state = playing then
			pause
		else
			play
		end if
	end tell

(I just realized that QuickSilver's already included a play-pause script... oh well).

(update 11 July 2005)

An alert reader points out that iTunes already has a playpause command!

tell application "iTunes" to playpause

Oh well. My AppleScripting abilities are quite lame. I can read the damn stuff but writing it almost always leaves me stuck between the data model, the syntax and the available commands. I was completely happy to be able to create this little ditty and I'm going to hold on to that happiness for dear life, dammit <big smile>.


Florida continues to be crazy
Friday, July 8, 2005

Arresting someone for accessing an unsecured wireless access point? What's next, a law making it illegal to read over someone's shoulder?


The nature of meaning in the age of Google
Friday, July 8, 2005

Google is continually finding new uses for its vast database of hyperlinked text: spelling, mapping, definitions and so on. Terrance Brooks (who has, I assume, no relation to fantasy author Terry Brooks!) points out that Google makes use of lay indexing (i.e., folksonomies) to produce aggregations with semantic content -- meanings! Similar lay indexing lies behind Amazon's book suggestions, Flickr's photo-sharing and del.icio.us's bookmark collections. In all these cases, however, a tension develops between the aggregator's algorithmic strategies and users' attempts to exploit those strategies: spam. Each publisher would like to push Google towards her content but Google only functions well when it can exploit the wisdom of crowds - i.e., when control is based on diversity. All of these systems, then, are fundamentally social and can only function when most people play "by the rules" either because they want to or because they have no other choice. Brooks says:

The culture of lay indexing is one of mistrust and ignorance: the lay indexer's ignorance of when, if, and how her work will be used, and Google's mistrust of lay indexers, whom it must assume are constantly scheming to gain an advantage over the Googlebot.

...

Struggling to maintain the ignorance of lay indexers in the culture of lay indexing contrasts sharply with the historical treatment of indexers. During the last several hundred years in the craft of book arts and scholarly journals, indexers have been honoured and respected. In this legacy culture of indexing, indexer ignorance was an anathema to be avoided, not enhanced.

The internet is new because it is 'open': anyone (in the technological world) can author anything and declare its meaning. It is:

a lawless meaning space... a novelty that most traditional meaning technologies have not anticipated. Being able to operate successfully in a lawless meaning space is, however, the key success criterion for legacy meaning technologies that are applied to Web space.

Most formal systems (e.g., Dublin Core, RDF, meta tags) ignore this dictum and are therefore ignored by Google!

Since, like most people, I'm the sort of person who usually does the right thing, I think that Brooks's paper provides an interesting perspective on lay indexing, Google and the differing strategies they adopt. It would be fun to pull in the whole evolutionary games perspective (perhaps someone already has?!).


Tremendous backlog
Friday, July 8, 2005

I have this tremendous backlog of computer science papers that I've told myself I need to summarize for this blog... I mentioned this problem to a friend and he said, "just mention the top one or two things for each one or you'll never get it done." I thought, "sage advice."

Thus begins a rapid (I hope!) set of paper summaries...


Political Suicide
Tuesday, July 5, 2005

This was the first Robert Barnard mystery I'd come across and it made a wonderful introduction. I am admittedly something of an Anglophile and Barnard's books are steeped deeply in the complex social, class and caste issues that permeate British society. Political Suicide takes these as a base and stirs in party politics, dirty tricks and even environmentalism. I found it wonderfully funny and equally intriguing.


At Death's Door
Tuesday, July 5, 2005

At Death's Door by Robert Barnard is an enjoyable yarn about relationships, family, trust, hatred, and loathing. It feels typically British in its details and sensibility and the writing and characters are superb. To be fair, it is a bit slow going at times -- there is not always all that much there there -- and some of the twists seem gratuitous to the core of the plot. Still, this is a mystery worth reading for its characterizations and its dialog. Recommended.


Google quick reference
Thursday, June 30, 2005

Very handy! I didn't know Google could do all that.


Sideways
Sunday, June 26, 2005

I still haven't seen the movie but can now honestly recommend the book. Sideways is an enjoyable, somewhat over-sexed romp through mid-life crises, friendship, love and wine. Lots of wine. Oh boy, a whole lot of wine!

In the end, I didn't find the novel all that believable -- then again, I live in staid Massachusetts, not rip-roaring LA -- but it is very fun and satisfyingly true.


Enhanced hypertext categorization using hyperlinks
Thursday, June 23, 2005

Intuitively, knowing that this thing is related to that thing ought to help me understand both of them better. Ah, but how to put that intuition into practice? Chakrabarti et al. present one of the early sets of answers. The paper is a wealth of ideas that have been mined by many others in recent years.

Suppose I'm trying to classify a set of objects X into categories or classes. Can linking between objects help? Thinking about the automatic classification of papers into topics or patents into categories provides the seeds of hope and cause for doubt. Papers I link to (or that link to me) should contain information about my class. Ah, but they may also contain much that is irrelevant. Indeed, the authors found that trying to use all of the text in linked papers produced no improvement and often made things worse. So how can I use the right information and ignore the bad stuff? How do I fix the "noisy neighbor" problem? One answer is that instead of using the text in the links, we instead use the class of the links. Our classifier then has as input the local text plus class labels on some portion of the papers linked to it. Does this help? Yes. It helps a lot.

Now we can move on to a more realistic problem. Instead of classifying one paper, I've got bunches of them plus their interlinks. Some have labels, some don't. How should I classify them? One at a time via some sequential decision process (and what would the best order be to ask the questions)? All at once? How? Do I use only adjacent links or should I travel out further to ask questions about my neighbors' neighbors? Chakrabarti shows that we can borrow techniques from image processing (relaxation labeling) to co-classify everything simultaneously and iteratively. I.e., first take the information you have and make your best guesses (in a maximum likelihood or Bayesian sense) to update all the class labels. Then, do it again. With a bit of math and a network that exhibits homophily (the love of Philadelphia), this will converge to a stable and remarkably accurate answer. It will do much better than text alone ever could and it will do well even if most of the class labels are not initially known; even if all of the class labels are unknown! The relational structure of the network leads towards a co-consistent labeling.
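Here is a cartoon of the loop in Lisp -- a crude majority-vote version, not Chakrabarti's actual relaxation labeling, but it shows the shape of the iteration:

(defun majority-label (node labels neighbors fallback)
  "Most common label among NODE's labeled neighbors, else FALLBACK."
  (let ((votes '()))
    (dolist (n (gethash node neighbors))
      (let ((l (gethash n labels)))
        (when l (incf (getf votes l 0)))))
    (if votes
        (loop with best = nil and best-count = -1
              for (l c) on votes by #'cddr
              when (> c best-count) do (setf best l best-count c)
              finally (return best))
        fallback)))

(defun relax-labels (nodes labels neighbors local-guess &key (rounds 10))
  "Iteratively relabel every node from its neighborhood; LOCAL-GUESS
is the text-only classifier, used when no neighbor is labeled yet."
  (dotimes (i rounds labels)
    (dolist (node nodes)
      (setf (gethash node labels)
            (majority-label node labels neighbors
                            (funcall local-guess node))))))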

In sum, this is a well-written paper with a wealth of ideas drawn together from information retrieval, machine learning, data mining, computer vision and statistics. Very good stuff.


the Wailing Wind
Thursday, June 23, 2005

Though I think his best Navajo work is behind him, The Wailing Wind is another enjoyable yarn from Tony Hillerman. The themes, characters, landscape and even the plot feel somewhat recycled but it's still good for a few hours of escape and worth it for the occasional lyrical descriptions or the chance peering into the heart of what makes us human.


the Bat Tattoo
Thursday, June 23, 2005

The Bat Tattoo is an amazing book for those who are made a bit unsure by the nothingness lurking in the interstices of the world, between two lovers, under the bed, in the eyes of your dog. Hoban has a turn of phrase that is all his own and yet reminds me of Walker Percy. Suddenly struck, I stop and reread, wondering at the power behind such thoughts, behind marks on paper.

Zion is what you think there's no end of when you have it, then all of a sudden it's gone and there wasn't really that much of it.

...

Is it a sign of growing old when the faces coming towards you in the street are full of stories that you don't want to know?

The book concerns the interlocking relationships between Roswell Clark, Sarah Varley, Adelbert Delarue (also R. Albert Streeter), Jesus the man, Jesus the concept, art (real, modern, inane), crash test dummies, ancient Chinese bats, and the reader (though not in so many words).

"What I want," he said, "is you in various attitudes of listening: standing, sitting, lying down -- as many different ones as you can think of?"

"Listening?"

"Listening."

"For what?"

"That we don't know yet," he said, "It could take years."

Kafka said that we don't need books that make us happy. Rather, "we need books like ice picks, to break the frozen seas in us." Hoban's book is a rare one that battered my frozen seas, found me wrecked on a shore I've seen before, and left me happy.


Synchronization of Periodic Routing Messages
Tuesday, June 21, 2005

This is an old (1994!) paper with a really neat result:

Network architects usually assume that since the sources of this periodic traffic are independent, the resulting traffic will be independent and uncorrelated.

...

This paper argues that the architect's intuition that independent sources give rise to uncorrelated aggregate traffic is simply wrong and should be replaced by expectations more in line with observed reality.

...

This research suggests that a complex coupled system like a modern computer network evolves to a state of order and synchronization if left to itself. Where synchronization does harm, as in the case of highly correlated, bursty routing traffic, it is up to network and protocol designers to engineer out the order that nature tries to put in.

The results are sort of old hat to network and synchronization aficionados but they still seem wonderfully counterintuitive. I've not read the entire paper but the introduction and conclusion are clear and well written. Good stuff.


I think we need a generational garbage collector
Monday, June 20, 2005

Think Progress has some nice historical references for the US administration's claims of a short war juxtaposed with Secretary Rice's comments that this will be a generational commitment. Bush did say that the war on terror would be generational, not the war in Iraq.

(Sorry for the politics here on unCLog. I just liked the pun too much!)


Why change-class?
Thursday, June 16, 2005

(updated 21 June 2005)

Common Lisp lets you change the class of things on the fly and in fact has a whole protocol for dealing with appearing and disappearing slots, updating existing instances and so forth. It's really quite amazing and very handy for prototyping and ad hoc experimentation. My only complaint is that changing the class of an object takes way too much typing! You need to

(change-class my-object (find-class 'name-of-class))

This seems silly to me. Why not make class changing as simple as setting any other sort of value? Why not make it seem like Common Lisp by using setf? Why not make it more flexible? So, since I couldn't think of any reason not to, I wrote this:


(defgeneric (setf class) (class object)
  (:documentation "Change OBJECT's class to CLASS via setf syntax.")
  (:method ((class symbol) (object standard-object))
           (change-class object (find-class class)))
  (:method ((class standard-object) (object standard-object))
           ;; use the class of another instance as the target
           (setf (class object) (class-of class)))
  (:method ((class standard-class) (object standard-object))
           (change-class object class)))

This lets me do things like

(setf (class my-object) 'a-class-name)

and that's easier to type, looks like CL, and is even clearer (in my opinion). Any dissenting voices are welcome to comment!

Comments

Well, it looks like I've put my foot in it! Not only can you already say things like

(change-class my-object 'a-class-name)

but redefining a built-in symbol of the Common Lisp package like class is -- of course -- a no-no. My implementation also ignores the fact that you might want to add initargs to the change-class form (though that's easy to fix). On the other hand, I still think that using setf to change classes is a good idea. My general philosophy of change in Lisp is that if you can use setf, then you should. In any case, thanks for the feedback.


You know what's weird
Tuesday, June 14, 2005

I'll tell you what's weird. For the last six months or so, OS X's Script Editor has refused to accept backspaces or returns. To make a new line, I need to type control+Return. To delete text, I need to select and type over it. It all works... and I have no idea when or why this began happening. Thank god that computers aren't in charge of anything critical like cars, or air traffic control. Hmmm.


Favorite comment of the week
Monday, June 13, 2005

This is in the source of a well known Common Lisp:

; This has to be defined fairly early (assuming, of course, that it "has" to be defined at all ...

I can definitely relate!


Sean Carroll on Bio-Tech nation
Monday, June 13, 2005

Moira Gunn talks with Sean Carroll (yes, that's two n's, two r's and two l's <smile>) on Bio-tech nation. The interview is mostly to promote his new book Endless Forms Most Beautiful: The New Science of Evo Devo and the Making of the Animal Kingdom but he has a wonderful response to the partisan nuttiness of creationists. It's worth hearing for that alone.


the Digital Person: Orwell or Kafka
Thursday, June 9, 2005

I recently listened to another excellent IT Conversations interview on my iPod mini. This one was with Dan Solove, an associate professor of law at George Washington University. He argues in his book the Digital Person that it's not Big Brother we should be worried about, it's the crazed bureaucracy of Kafka's the Trial (Terry Gilliam's Brazil should also come immediately to mind). At issue are the structures we have set up to control access to our information -- which are weak at best. Unfortunately, no one with power really cares or understands. So once again we have to ask: what are we going to do about it?


Doug Engelbart
Monday, June 6, 2005

Naive enough to decide that his goal would be to find a career

that will maximize the contribution my career can make to mankind.

Smart enough to succeed!

Doug Engelbart's IT Conversations presentation on Large Scale Collective IQ from Accelerating Change 2004 is a wonderful view of a mind that has wrestled constantly with the fundamental challenge of our time:

Mankind is not getting smarter at anything like the rate that complexity is accelerating

so the only hope is to find ways to improve our collective intelligence! Humble, funny, and (dare I say it) wise, Engelbart's words are worth hearing and his ideas need to find incarnation.


Alex Steffen and Bruce Sterling solve the world's problems
Thursday, June 2, 2005

Alex Steffen and Bruce Sterling's 2005 keynote presentations from the South by Southwest Interactive Festival from IT Conversations are fun, informative and frustrating. On the plus side, they talk about many of the real problems facing our species (over-population, fresh water supply, de-forestation, economic imbalances, and so forth). On the down side, their presentation is pollyannaish and views technology as too much of a panacea. I'm a pessimist because technology is not the problem or the solution. Technology takes and it gives. It solves and creates new problems simultaneously, often in ways that are not clear until years in the future. In the end, it's people that have created the problems collectively and people will need to solve them collectively.

Trouble is, I don't see any large scale efforts to educate people so that their behavior is likely to produce good effects in the short and long runs. Indeed, doing so goes against much of the free market, consumerism-is-good ethos of the (so called) first world. Hmmm, maybe I should have put this on polliblog!


Nice response to Joel Spolsky's dislike of exceptions
Thursday, June 2, 2005

Joel Spolsky (of Joel on Software) doesn't like exceptions. He paints an oily-feeling picture of why in a recent essay. Christian Lynbech explains why Joel is wrong. End of story. Oh, best quote: "Exceptions are a fact of life, deal with it."


Much better than "OK"
Friday, May 27, 2005

You've probably read rants about error message dialogs and that damn "OK" button. No, it's not OK. I'm trying to do something and you're not letting me. Here's a dialog that's fatalistic, but much closer to reality.

From the fairly wonderful i-installer.


A measure of betweenness centrality based on random walks
Friday, May 27, 2005

Network analysis is big business nowadays because, suddenly, we see networks everywhere we look (the internet, power grid, food web, social networks, the genome, to name a few). Mark Newman is one of the physicists who has brought serious mathematical chops to their analysis. In this paper, he investigates a new way to determine how "important" a vertex is to its network: its centrality. The intuition for most centrality measures is obvious: a vertex is more central the more other vertexes depend on it to reach the rest of the network. Measuring it requires forming this intuition into an algorithm that provides it with precise meaning. So, for example, degree measures how many connections a vertex has, closeness the average shortest path distance between a vertex and every other vertex, and betweenness how many paths between other vertexes a vertex lies on.

Different variants of betweenness centrality exist because there are different ways of determining paths. At one extreme, we can look at all and only the shortest paths between vertexes (shortest-path betweenness); somewhere in the middle, we can look at flow betweenness: look at every path but count longer ones less. Both of these cases presuppose some measure of intelligence (so to speak) on the part of whatever is choosing the paths. In some cases, this is very reasonable (news transmission comes to mind); in others, it is not (the spread of infectious disease). In response to this, Newman defines random-walk betweenness: make lots of random walks between all pairs of vertexes on a network and measure how often these pass through a given vertex. One surprise is that you can compute this without making all those random walks (he gives an O((m + n)n^2) algorithm involving matrix inversion).
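For intuition, here's the brute-force version (Newman computes the answer exactly with one matrix inversion, and raw visit counts are only a proxy for his net-flow definition, so treat this as a sketch):

(defun walk-visits (start target neighbors counts)
  "Run one random walk from START to TARGET over a connected graph,
tallying every visit in the COUNTS hash table."
  (loop for v = start then (let ((ns (gethash v neighbors)))
                             (nth (random (length ns)) ns))
        until (eql v target)
        do (incf (gethash v counts 0))))

Repeat over many pairs of vertexes and the relative counts approximate how much random traffic each vertex carries.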

Newman goes on to compare this new measure with the existing ones on several different graphs: a few designed to highlight deficiencies in the existing measures, a graph of sexually transmitted disease contacts in Colorado and a graph depicting family relationships in 15th century Florence (I told you networks were everywhere!).

Newman's writing is always concise and clear. His math is hard but not insurmountable. This is a paper well worth reading and the random-walk betweenness is definitely a keeper.


William Wulf
Tuesday, May 24, 2005

William A. Wulf, Ph.D., President, National Academy of Engineering speaks to Congress regarding the Federal government's support for computer science research. It's a good read. I hope someone listens.


How have we survived this long
Tuesday, May 24, 2005

Without being able to pet chickens electronically:

Researchers have developed a cybernetic system to allow physical interaction over the internet. The system allows touching and feeling of animals or other humans in real time, but it's first being tried out on -- chickens.

The mind boggles.


simple, complex, reliable?
Tuesday, May 24, 2005

From Joel on Software:

The way to write really reliable code is to try to use simple tools that take into account typical human frailty, not complex tools with hidden side effects and leaky abstractions that assume an infallible programmer.

Joel uses this as an argument not to use, for example, objects, macros, AOP, etc.

On the other hand, Grady Booch says:

In the presence of essential complexity, establishing simplicity in one part of a system requires trading off complexity in another.

They're both right. (And no, I'm not going to quote Einstein and his simple-but-no-simpler thang). Anyone that has mucked with CLOS has found themselves in the method combination now-what-methods-exactly-are-getting-called morass. That's not, I think, an argument against method combination. It may be an argument for establishing conventions and for not using method combination willy nilly (the every-problem-a-nail syndrome).
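A miniature of the morass, for the record (which of these methods run, and in what order?):

(defclass a () ())
(defclass b (a) ())

(defgeneric poke (x))
(defmethod poke ((x a)) (print "primary on a"))
(defmethod poke :before ((x b)) (print "before on b"))
(defmethod poke :after ((x a)) (print "after on a"))
(defmethod poke :around ((x b))
  (print "around on b")
  (call-next-method))

;; (poke (make-instance 'b)) prints, in order:
;; "around on b", "before on b", "primary on a", "after on a"
;; -- and real cases with nested :arounds get much murkier.

That's standard method combination behaving exactly as specified, and it still takes thought to trace.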

We can't make complexity go away. The goal is to find the right abstractions (even when they leak) so that complexity is managed. This takes skill and time and training (more than 21 days!) and I'm pretty sure that complete avoidance is a bad idea.


Suspicion scoring based on guilt-by-association, collective inference and focused data access
Monday, May 23, 2005

This is a short paper with a long title! Traditional machine learning based classification works with instances -- think of rows in a spreadsheet. The goal is to take training instances and produce a rule or set of rules that will correctly classify future instances. This paper, however, is not traditional. It is one of a recent (i.e., within the last 5 to 10 years) crop of papers that understand that instances are related to one another. Guilt-by-association is not an instance based classifier. It is a relational one. My guilt depends on the guilt of the people I know and their guilt depends on mine. Relational classifiers are like Google's PageRank or Kleinberg's Hubs and Authorities: beautifully recursive.
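The flavor of guilt-by-association fits in a dozen lines (my cartoon, not Macskassy and Provost's algorithm): each node's suspicion score drifts toward the average of its neighbors' scores, round after round.

(defun update-suspicion (scores neighbors &key (alpha 0.5))
  "One round of smoothing: blend each node's score with the average
score of its neighbors. SCORES and NEIGHBORS are hash tables."
  (let ((new (make-hash-table :test (hash-table-test scores))))
    (maphash
     (lambda (node score)
       (let ((ns (gethash node neighbors)))
         (setf (gethash node new)
               (if ns
                   (+ (* alpha score)
                      (* (- 1 alpha)
                         (/ (reduce #'+ ns :key (lambda (n)
                                                  (gethash n scores 0)))
                            (length ns))))
                   score))))
     scores)
    new))

Iterate until the scores settle and the recursion resolves itself.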

Collective Inference -- the simultaneous analysis of multiple related instances -- is another new term (popularized by David Jensen among others). Perhaps surprisingly, collective inference can often do better than a more traditional, one step at a time approach.

Macskassy and Provost's paper combines relational classification, collective inference and the focused and dynamic acquisition of new data. They present their system and the tools behind it, show it working on multiple simulator-created data sets and analyze the results. The most interesting bit is one of the things that doesn't happen: they found that adding additional profiling data (i.e., adding more data about how suspicious instances are) did not help the classification algorithm in general. Instead, the known labels essentially washed the extra data away. If a result like this can be understood, we'll have a better sense of when -- and why -- profiling does -- and doesn't -- work. That would be something very worth having!


Adding to your LaTeX search path
Tuesday, May 17, 2005

This falls in the category of "should-have-been-simple-but-took-me-way-too-long". I was trying to become a bit more organized in my LaTeXing and, well, here's the whole story.

I use TeXShop on OS X with the wonderful teTeX distribution underneath. I usually use pdftex and dislike having lots of figures in the same folder as my LaTeX documents -- it's messy. I also dislike having to include the path to the figures in the \includegraphics commands. The solution seemed obvious: extend the LaTeX search path!

I spent an hour (or two) reading various configuration files and TeX documentation. It seemed like adding something like TEXINPUTS.latex = .;./figures;$TEXMF ... to a configuration file somewhere would do the trick. But it didn't. I also tried modifying the TEXINPUTS shell variable (which eventually turned out to be the trick) but didn't do it correctly. Finally, I re-read this page and realized that a trailing colon was the key to my dilemma (the trailing colon tells the search machinery to append the default path rather than replace it). Another trip to the property editor, log out and back in, and Bob is finally my uncle.

TeX is cool and massive; endlessly configurable and, hence, endlessly frustrating. In any case, I have improved my setup and I have learned a bit more about how the whole TeX / LaTeX shebang works. Perhaps it was time well spent?! Perhaps.


Books
Friday, May 13, 2005

Work has been driving me a bit batty lately but I've still found some time to read and re-read. Someday, I hope to have time to write about the code I'm working on: nothing too crazy or wild, but some good stuff nonetheless.


Ender's Shadow
Friday, May 13, 2005

I read Ender's Game about a million years ago when I was in middle school. I loved it. I went on to many other Orson Scott Card books but eventually found them wanting. They were not exactly formulaic but they lacked the humanity of Ursula K. Le Guin's work and sometimes just didn't seem worth finishing.

Ender's Shadow -- a retelling of Ender's Game from the perspective of another character -- is a fun book. It adds depth and class while retaining the series' essential character. I enjoyed it. On the other hand, it didn't make me want to search out Card again.


Beyond Fear
Friday, May 13, 2005

Bruce Schneier has written a classic: cogent, lucid and clear. He puts into words many of my inchoate thoughts and explains the trade-offs of security and -- more importantly -- how to evaluate them. You may disagree with the particulars of his argument but I think his framework is a keeper. This should be required reading for every adult in America.


the Dispossessed
Friday, May 13, 2005

This has to be one of Ursula "don't forget the" K. Le Guin's best novels. Sensitive, moving, thoughtful, with all of the air of depth and dirt of a real world populated by real people. I've read this numerous times and am always struck by the ideas and the pain. Exquisite.


CSS Cheat sheet
Friday, May 6, 2005

Created by Dave Child and linked by me to you via Daring Fireball via Dave Shea. Ain't the web grand.


New quotes
Sunday, May 1, 2005

New quotes from Cornel West, Pablo Neruda and Leslie Boyer (sorry for the previously broken links).


The Great Influenza
Sunday, May 1, 2005

The American flu vaccine brouhaha and worries regarding the Asian bird flu have produced a small industry of books about the great flu epidemic of 1918. I recently finished John Barry's excellent work (though this, this and this look interesting too). It's a wonderful book that traces the history of Western (allopathic) medicine through the ages, outlines influenza's amazing tricks, renders personable the scientists, doctors and volunteers who fought to understand, control and treat the disease and also paints a picture of America in a time of change and political unrest. I thought that the current Bush administration's penchant for secrecy, word twisting (OK, lying) and smearing was unequaled but I now believe that life was actually much worse under Wilson. Back then, of course, they had a real war to fight and everyone was called upon to do their patriotic duty (and if you didn't, you might find yourself in jail, ostracized or worse). Complaints and criticism were labeled unpatriotic (doing the Kaiser's work) and it wasn't pretty.

But the politics in the book is only a sideline used to help explain how the disease spread in America and from America via the tremendous mobilization and dissemination of the armed forces. It was an awful illness. It killed an uncounted number of people and was usually worse amongst those between 20 and 40 -- the ones with the strongest immune systems. These people would die because their own immune response was so powerful that it literally ripped their lungs to shreds. It killed so many that whole communities were left bereft. The death rate was high in America and Europe and even higher in China, India and other nations.

Even in this age of AIDS and jet-plane-borne exotic diseases, we've forgotten how illness can lay us low. It's ironic that the true victors in H. G. Wells's War of the Worlds are not humans -- we cannot defeat the martians -- but microbes. The Great Influenza reminds us that medicine cannot cure all and that the cry of the epidemiologist in the wilderness is one worth hearing.


Flickr fun
Thursday, April 28, 2005

What can you do with a large semantically tagged database and other related metadata? Cool stuff!

This was mentioned over on Edward Tufte's forum.


Big databases and small ideas
Wednesday, April 27, 2005

Here's another child of the PATRIOT act (via Ars Technica via Wired News).

... thanks to the PATRIOT act, banks are spending billions on highly sophisticated, government-mandated anti-money laundering (AML) software that will track every last transaction of every last customer in order to build up individual customer profiles and look for "suspicious" activity. And when they find some suspicious activity, they're going to want an explanation out of you, regardless of whether or not you fit any sort of terrorist profile

To add insult to injury, the government should make this into a screen saver... something like Tax Evasion at Home. Then your own computer could do the figuring, report you and fine you automatically. Sweet.


Harper's
Tuesday, April 26, 2005

SPAM: Thanks to people like this, it's working:

An American businessman spent $802,600 over the Internet to buy a house in India; when he arrived in New Delhi, he found that the house he was promised was actually the Prime Minister's residence.

From Harper's weekly.


Updates, shmupdates
Tuesday, April 26, 2005

I like software updates. I've been doing this computer stuff a long time but I still feel a bit like a kid in a candy store when I hear about new features or new tools or new widgets that purport to improve my productivity, enhance my desktop esthetics or be fancifully fun. Why, however, must I go to some website, download some software, expand it, mount it, copy it / install it, unmount it, and trash it manually! Apple's software update gets it right: all the work happens behind the scenes. If Apple can do this at the OS level (yes, with the occasional restart required), it should be easier for applications to handle this stuff.

I want someone to write a tool / Cocoa component that automates this manual drudgery as much as possible: check the web, ask for permission, download, quit, install, restart, etc. Then I want everyone to use the component. Then I want a manager that checks everything automatically once a week the way software update does -- even better, I want it integrated with software update -- and lets me check the updates I want and skip the ones I don't.

Unfortunately, the someone isn't likely to be me: I don't know Cocoa well and I don't have the time. If you write it though, I'll promise to kiss your feet or lick stamps for you and buy you a couple of beers.

Hey, it's 2005. We don't have jet packs and personal rocket ships but at least we could be freed from update hell!


Perfection, Performance
Monday, April 25, 2005

I'm pretty sure I've seen this before, but Robert Strandh has a nice answer to a perennial question:

But why do people deliberately waste time when there are much more efficient ways of working? This is a very good question. In fact, it is such a good question that I decided to ask a good friend of mine, Lisa Feldman Barrett, who is professor of psychology at one of the top universities on the east coast of the USA. What she told me was ... that (with respect to this phenomenon) people can be roughly divided into two categories that she called perfection-oriented and performance-oriented.

[P]erfection-oriented [people] have a natural intellectual curiosity. They are constantly searching for better ways of doing things, new methods, new tools. They search for perfection, but they take pleasure in the search itself, knowing perfectly well that perfection can not be accomplished. To the people in this category, failure is a normal part of the strive for perfection. ...

[P]erformance-oriented [people] on the contrary, do not at all strive for perfection. Instead they have a need to achieve performance immediately. Such performance leaves no time for intellectual curiosity. Instead, techniques already known to them must be applied to solve problems. To these people, failure is a disaster whose sole feature is to harm instant performance ...

As Strandh points out, people can be oriented differently in different areas of their lives.

I can think of several people whose behavior this generalization helps explain and as Yoda said, "Explaining leads to understanding. Understanding leads to compassion and compassion leads to hope."


Amazon Associates R Us
Wednesday, April 20, 2005

I finally got around to signing up for Amazon Associates a month or two ago and today I finally got around to updating (some parts of) unCLog to make use of it. I expect everybody (I'd say anybody but that would just expose my insecurities!) reading unCLog knows all about this already: if you buy a book by clicking on one of my links to Amazon, then I become richer than god. Really. Expect to see some other minor site updates soon (but see the first sentence to understand what I mean by soon). Thanks.


Food Fight
Tuesday, April 19, 2005

In what feels like an interesting twist, the United Nations World Food Programme has released Food Force:

A major crisis has developed in the Indian Ocean, on the island of Sheylan. We're sending in a new team to step up the World Food Programme's presence there and help feed millions of hungry people.

An aircraft circles over a crisis zone. War. Drought. People are hungry. This is the virtual world of the Food Force video game.

It represents too many parts of our real world, where 852 million people lack enough food to eat and World Food Programme teams deliver food aid using not only airplanes but a fleet of ships and thousands of trucks.

Play the Food Force game, learn about food aid, and help us work towards a world without hunger.

I haven't played the game and I wonder if this will help people feel compassion for the suffering. I hope so.


Clay Shirky gives the semantic web what for
Tuesday, April 19, 2005

Clay Shirky has a great article on the chief problem facing the semantic web: logic isn't all it's cracked up to be. In particular, humans don't spend much time with syllogisms.

The people working on the Semantic Web greatly overestimate the value of deductive reasoning (a persistent theme in Artificial Intelligence projects generally.) The great popularizer of this error was Arthur Conan Doyle, ... Doyle has convinced generations of readers that what seriously smart people do when they think is to arrive at inevitable conclusions by linking antecedent facts. As Holmes famously put it "when you have eliminated the impossible, whatever remains, however improbable, must be the truth."

...

This sentiment is attractive precisely because it describes a world simpler than our own. In the real world, we are usually operating with partial, inconclusive or context-sensitive information. When we have to make a decision based on this information, we guess, extrapolate, intuit, we do what we did last time, we do what we think our friends would do or what Jesus or Joan Jett would have done, we do all of those things and more, but we almost never use actual deductive logic.

Indeed, computer systems that enforce simple rules (or even complex rules!) on messy human situations either fail to meet human needs or are subverted.

[W]hen we see attempts to enforce semantics on human situations, it ends up debasing the semantics, rather then making the connection more informative. Social networking services like Friendster and LinkedIn assume that people will treat links to one another as external signals of deep association, so that the social mesh as represented by the software will be an accurate model of the real world. In fact, the concept of friend, or even the type and depth of connection required to say you know someone, is quite slippery, and as a result, links between people on Friendster have been drained of much of their intended meaning. Trying to express implicit and fuzzy relationships in ways that are explicit and sharp doesn't clarify the meaning, it destroys it.

I think this explains the power and explosive ferment of folksonomies: they let people do what people want to do without getting in the way. As I said before, I think that machines ought to be able to help make folksonomies better without destroying what makes them special. But the right order to do things is to support organic growth always and worry about cleanup after the system begins to get strong (a gardening metaphor?).

Much of the proposed value of the Semantic Web is coming, but it is not coming because of the Semantic Web. The amount of meta-data we generate is increasing dramatically, and it is being exposed for consumption by machines as well as, or instead of, people. But it is being designed a bit at a time, out of self-interest and without regard for global ontology. It is also being adopted piecemeal, and it will bring with it with all the incompatibilities and complexities that implies. There are significant disadvantages to this process relative to the shining vision of the Semantic Web, but the big advantage of this bottom-up design and adoption is that it is actually working now.


Will Wright at Accelerating Change
Tuesday, April 19, 2005

Will Wright (SimCity, the Sims) talks about "Sculpting Possibility Space" at Accelerating Change 2004 held at Stanford University, November 5-7, 2004. Wright is a good and informal speaker. He covers a lot of territory at a superficial level and leaves one excited at the possibilities. The bits I found most interesting in this talk were the appeal of bringing players onto the development team as content creators (distributed creation), the social aspects of game play and community building, and finding ways to make rapid use of the data collected from players to sculpt the game, possibly in real time. Recommended.


ITConversations on the quick
Saturday, April 16, 2005

I've been doing a lot of listening lately. Here is a quick list of a bunch of it. I didn't hear anything that knocked my socks off but there is a lot of good stuff here to make driving more fun:

Phil Windley interviews Kent Seamons from 24 February 2005. It's an interesting talk about digital identity and digital certificates. Good.

Moira Gunn speaks with:


Dan Bricklin and Mikhail Gorbachev
Thursday, April 14, 2005

Wow! Dan Bricklin gets a hug from Mikhail Gorbachev. He has a very fun-to-read and thought-provoking piece. A quote from Dan's notes on Gorbachev's speech:

At the first Communist Party Congress that he held in 1986 he said for the first time as the head of the party that we are living in an interdependent and interrelated world. The interconnected world is a reality. A new world with new relationships, new kinds of exchanges. This was a right conclusion. Many people thought it was rhetoric, but today we continue to live in this interrelated world. How do we make this world livable? For everyone? This is a world of stress and poverty for over half the population. We cannot allow the world to continue like this. This is a delayed reaction bomb, the roots of terrorism, epidemics, etc. We must do a lot of thinking about this. If information technologies just work for the benefit of developed countries, while much of the rest of the world continues to live in a pre-industrial era, this is not the way to go. We need more justice, more humanism.


Rooter! (with thanks to Brian Mastenbrook)
Thursday, April 14, 2005

Perhaps too funny for words (make sure to read the response to the query regarding reviews)! Thanks Brian, I needed a laugh today.


Amazon's SIPs
Sunday, April 10, 2005

It's too geeky by half, but Amazon's new Statistically Improbable Phrases (SIPs) seem like an interesting idea:

Amazon.com's Statistically Improbable Phrases, or "SIPs", show you the interesting, distinctive, or unlikely phrases that occur in the text of books in Search Inside the Book. Our computers scan the text of all books in the Search Inside program. If they find a phrase that occurs a large number of times in a particular book relative to how many times it occurs across all Search Inside books, that phrase is a SIP in that book.
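Amazon doesn't publish the formula, but the obvious first cut is a simple rate ratio (my sketch, not Amazon's actual scoring):

(defun sip-score (book-count book-words corpus-count corpus-words)
  "Ratio of a phrase's within-book rate to its corpus-wide rate;
the 1+ keeps a corpus count of zero from blowing up."
  (/ (/ book-count book-words)
     (/ (1+ corpus-count) corpus-words)))

The higher the score, the more improbable the phrase.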

I like what having lots of data can do for you. This ability to leverage off of the collective outcome of normal behaviors reminds me somehow of the social insects like bees and ants. We each do what we do and a pattern emerges.


Recent ITConversations
Thursday, April 7, 2005

From the Mac OS X 2004 conference, David Pogue has an amusing talk about interfaces: the good, the bad and the future. Like most talks of this nature, it focuses more on railing against current imperfections than on what would improve things. Fun if you have time but certainly not required.

Peter Norvig (Google), Richard Rashid (Microsoft) and Jim Spohrer (IBM Research) share a panel discussing the new stuff coming out of their labs. Norvig discusses how machine translation changes when you've got the world's biggest database of text; Rashid highlights the Terraserver, Skyserver (very cool) and Microsoft's push to make scientific data available to the masses; finally, Spohrer talks about services and how IBM is trying to create a new discipline: service science. Recommended.

From Accelerating Change 2004, Gordon Bell from Microsoft talks about MyLifeBits and the Memex Vision. This is yet another "hey, we've got all these hard disks, let's fill 'em up with stuff from my life." Maybe I just don't get it, but I don't want to store my life on a hard drive nor do I see much value to be gained from doing so. We need tools that help us with wisdom, not ones that confuse quantity with quality. I thought it was silly.


iPod secrets
Wednesday, April 6, 2005

These may not be secrets but I didn't know about them. I listen to a lot of books and talks on my iPod and a recurrent irritation has been getting halfway through an hour-long talk and being interrupted only to find that the iPod has lost my place (either because one of my kids has used it or because it needed to recharge). The AAC format does a better job of keeping track of where you left off but it hasn't been perfect.

Why, I wondered, hasn't Apple made it possible to use the scroll wheel to move around in a song? Why, I wondered, haven't they made it possible to speed up or slow down the text of an audiobook? It turns out that they have! I was trying to think about the problem today from Apple's perspective. They don't want to add more controls. The scroll wheel works great as a volume adjuster and that is probably the most common thing people will want to do. The menu, play/pause, next and previous buttons are taken. Hmmm, wait a minute. What does the center button do when a song is playing? It makes a menu selection when not on the song screen but it's free while a song is playing. So I started clicking it.

It turns out that a double click changes the interface so that the scroll wheel lets you rate the song. If you have an audio book, a second double click lets you adjust the speed from slower to normal to faster. Finally, if the song is playing and you hold the button down for a bit and then let it up, the scroll wheel lets you adjust your position within the track! Yes!

I don't know how long this functionality has been in the iPod or how common this knowledge is. I have the feeling, however, that it is not all that well known so spread the word.

(Of course, I could have googled and found out but where's the fun in that?)


Quick Review: Witch
Wednesday, April 6, 2005

Have you ever wanted to switch to a certain window — not just the application it belongs to? While you can use Exposé to switch windows, doing so can be very clumsy if you're the keyboard-only type of user. And don't all of these windows look just the same when they are scaled down?

Witch is a window switcher that operates similarly to the Mac OS X application switcher. You can switch between every application's windows or just the current application's. It gets a bit slow when you have a lot of windows (but so does Exposé). You can customize the appearance of the switcher pane and adjust the hot keys that activate it. Witch is a handy application (ugh, no pun intended). Recommended.


Malcolm Gladwell reviews Jared Diamond's Collapse
Tuesday, April 5, 2005

Malcolm Gladwell has a nicely written review of Jared Diamond's Collapse. Diamond finds the idea that "civilizations are destroyed by forces outside their control, by acts of God" wanting. Rather, "The lesson of 'Collapse' is that societies, as often as not, aren't murdered. They commit suicide: they slit their wrists and then, in the course of many decades, stand by passively and watch themselves bleed to death." I've added the book to my reading list...


Harper's weekly
Tuesday, April 5, 2005

In Shanghai, a man stabbed and killed another man for selling their jointly owned imaginary cyber-sword without sharing the proceeds.

No comment.


Clark Aldrich on Simulational Education
Tuesday, April 5, 2005

Given that Clark Aldrich co-founded a simulation company, it's not too surprising that he thinks simulation is the way to bring education into the modern era. Though I think he overstates the case against "linear media" (i.e., books, movies, speech) and overhypes the benefits of simulation and educational games, the research he cites regarding the success of some simulations for teaching leadership would be interesting to see in detail. I firmly believe in the use of manipulatives in learning -- playing with blocks, knitting, taking real things apart and putting them back together -- but I don't see how virtual manipulatives can serve in this role. A simulation of a rattlesnake on the trail isn't a rattlesnake on the trail. A simulation of magnetism isn't magnetism. This doesn't mean that simulation shouldn't be added to our toolbox, it just means that we shouldn't add it uncritically or at the expense of other tools.


Clay Shirky at Emerging Technology
Monday, April 4, 2005

ITConversations has Clay Shirky's "Ontology is Overrated: Links, Tags, and Post-hoc Metadata" talk from the O'Reilly Emerging Technology Conference. It's really good. Though he conflates classification and categorization, he makes a great case for why we want non-binary groupings (flexible categorization with multiple shades of meaning) and how systems like del.icio.us provide this without requiring semantic pre-analysis. Recommended.


Good example of sophisticated technology succumbing to simple means
Friday, April 1, 2005

This wonderfully clever exploit is actually from 2002 but it is still a good warning that our technology may not be as foolproof as we'd often like to believe.

A Japanese cryptographer has demonstrated how fingerprint recognition devices can be fooled using a combination of low cunning, cheap kitchen supplies and a digital camera.


Value versus Cost
Friday, April 1, 2005

Alan Perlis said that Lisp programmers "know the value of everything and the cost of nothing" because every Lisp expression returned something (*) but Lisp's high-level nature made it hard for neophytes (and often wizards!) to know how much work the expression actually involved. These days modern hardware architectures -- with their microcode, pipelining and other wizardry -- can make even C seem high-level. Usually, fast is fast enough and compilers are "smart" enough and this isn't a big problem. Sometimes, however, we still need to understand what is going on under the hood.

To this end, Jonathan Rentzsch has a gentle introduction to the power and pitfalls of byte alignment (focusing on the PowerPC platform). I had a vague idea of what was involved but he lays it out clearly and with vigor and I learned a lot!

(*) this was before a form like (values) could allow an expression to return nothing...
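
A tiny illustration of both points at the Lisp prompt (a sketch only; the timing output varies by implementation):

;; Two expressions that read as equally innocent; TIME exposes the hidden
;; work (the first conses a million-element list before summing it).
(time (reduce #'+ (make-list 1000000 :initial-element 1)))
(time (loop repeat 1000000 sum 1))

;; And the footnote's point: with (values), an expression really can
;; return nothing at all.
(values)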


Plants show a new trick
Friday, April 1, 2005

I'd love to know more about this research:

Plants inherit secret stashes of genetic information from their long-dead ancestors and can use them to correct errors in their own genes -- a startling capacity for DNA editing and self-repair wholly unanticipated by modern genetics, researchers said ....

How do the plants determine when to use this capability? How does the search through the space of possible ancestor genes work? Do similar capabilities exist in animals?

It's wonderful to keep being surprised by how the world works!


Speaking of the OS X Address Book...
Tuesday, March 22, 2005

Something tells me that this page on scripting the Address Book wasn't supposed to be linked from a search! <smile> Note that it is correctly not linked from the main applications page.


Quick Review: Thunderbird
Tuesday, March 22, 2005

Even though I've heard good things about it, I'm not going to use Thunderbird for OS X. Here's why:

  • It doesn't use the OS X Key Chain to manage passwords
  • It doesn't use the OS X Address Book to manage addresses

Those two reasons are enough to keep me away from it. The OS X Key Chain and Address Book are two great technologies that solve common problems (password and address management) at the OS level. I don't want another list of addresses and passwords, I want Thunderbird to use the ones I already have.


Will Wright at Game Developer's Conference
Monday, March 21, 2005

Will Wright does it again: creates a new way of thinking about games and game creation. This report on his Game Developer's talk is fascinating. Wright presents an open ended game that goes from single celled creatures to interstellar civilization while letting the players be the ultimate content creators. I guess the idea is that if you let them come, they will build it. Very cool.


I like Apple Mail but...
Friday, March 18, 2005

Apple Mail is a decent mail application. I've heard that Thunderbird is better but Mail works and works pretty well.

But! There is no easy way for me to go in and fix all of the signatures (30-40) that I've created. That is damn irritating and, it seems to me, stupid.

Whine, whine, whine, moan, moan, moan.


Software and Normal Accidents
Friday, March 18, 2005

Just found some random notes taken back when I was reading Normal Accidents. That book covers mostly physical systems where coupling occurs via spatial or temporal proximity and where failures arise via hidden and unwanted connections. Perrow's ideas apply to software too, though the immediate connections are a bit tenuous: we have coupling, we have common systems, we have large systems, we have hidden and unwanted connections. So what are the normal accidents waiting to happen in our software systems? One type arises from overwhelming complexity. As Perrow says when discussing nuclear attack warning systems:

It is an interesting case to reflect upon: at some point does the complexity of a system and its coupling become so enormous that a system no longer exists? ... We cannot be sure that it really constitutes a viable system. It just may collapse in confusion!

I think a more interesting line of attack comes from our desires to refactor and find the common routines. We all know the feeling of a subroutine or function trying to do too much, the sickness of a routine pulled in different directions as a system evolves. To me, this is one of the positive aspects of the Feyerabend project: as systems grow, we can no longer hope to achieve crystalline purity. Instead, we need to find an organic architecture that tolerates diversity and error.

Finally, I think it might be interesting to think of software development as a transformative process (though I admit to not being quite clear what we are transforming into what!). If it is, then what do we monitor to maintain the process? Checkins and checkouts? Number of changes? Bugs corrected and added? Number of function points? Number of tests? What else?

If anyone has ideas about this, please let me know.


Jon Udell on del.icio.us
Wednesday, March 16, 2005

The breadth and depth of Jon Udell's work has always amazed me. I just finished listening to a screencast on the collaborative bookmarking service del.icio.us that I mentioned a while back. The screencast is a tour de force: short, pithy, entertaining and profound.


Classification and Categorization: a difference that makes a difference
Thursday, March 10, 2005

Each discipline learns its own lingo and style. Sometimes it's hard to see the substance for the style - after all, isn't form supposed to follow function? That's how I felt when I first started reading this paper by Elin K. Jacob. What are all these words doing here! My philosophy and liberal arts background stood me in good stead, however, and things started to flow again after a few pages. I was glad I persevered because the paper does make several useful points.

The paper's raison d'être is the current foment in Library Sciences and Philosophy regarding information and our relationship with it. It's a discussion that drives pure engineers batty - let's just build the thing already - but that is foundational to getting things right in the long run. As people like Foucault pointed out, architectures (whether of buildings or systems) change the people that inhabit them; power and control exist even when there is no guiding hand. It behooves us therefore to think about what we create; especially about the long term effects of the relationships that our systems support and enable.

Elin's goal is to untangle and taxonomize the differences between categorization and classification and then to use the taxonomy to create better human/machine systems. To summarize, Elin defines categorization as dividing the world into groups of perceptually similar entities - note that the metric used is context dependent and shifting (cf. Barsalou and others). Classification, on the other hand, refers to grouping things based on some predefined system in an "orderly and systematic assignment of each entity to one and only one class within a system of mutually exclusive and nonoverlapping classes" (emphasis mine). Categorization creates new relationships; classification exploits existing ones (it is defining).

As an aside, Elin points out the parallel between some basic assumptions of knowledge classification ("universal order, unity of knowledge, similarity of class members, and intrinsic essence") and the classical theory of categories (essential features, hierarchical structure, category definition viewed as a summary) but doesn't explore in depth how the toppling of classical category theory by Wittgenstein and Rosch has altered views in knowledge classification.

The paper goes on to discuss how the mechanisms of classification and categorization differ in their attempts to establish order in the world. To summarize very roughly: classification is rigorous and fixed (you're not really "allowed" to alter the classes once you start); the structures (usually hierarchical) produced form powerful external scaffolding that minimizes cognitive load. Categorization is fluid and variable; the frameworks produced are often ephemeral. It is categorization, however, that exhibits true creativity in the never-ending play of flexible and dynamic relationships. Classification is knowledge bearing: knowing an entity's class tells me much about it because the class exists as part of a fixed structure; knowing an entity's category may not tell me much of anything at all because categorization is extremely context dependent...

The two techniques complement each other. This is from the paper's conclusion:

... the strength of classification is its ability to establish relationships ... that are stable and meaningful. But the rigidity of structure that supports these relationships has its corresponding disadvantages. In particular, traditional classification systems are context-independent: ... these systems [can] severely constrain the individual's ability to communicate with the system in a meaningful and productive manner. In contrast, systems of categorization ... are highly responsive to ... the immediate context. ... But the responsiveness and flexibility of the ... system effectively prohibit the establishment of meaningful relationships because categories are created by the individual, not the system, and are thus fleeting and ephemeral.

What we need, I think, is a theory of how to move smoothly from categorization to classification. We do this constantly already but not necessarily as well as we could or as well as we need to as the information flux of our environment continues to increase.


metabang
Thursday, March 10, 2005

By the way, I've moved unCLog (and Polliblog) to my new website, metabang.com. The old site is redirecting so I hope I didn't screw things up for my vast and loyal following <smile>. There's not much to metabang at the moment; that ought to change eventually.


work and home life
Thursday, March 10, 2005

Work and home life have made my posts especially itinerant. I hope that will change in a month or two.


Name that Baby
Tuesday, March 8, 2005

The NameVoyager visually presents the popularity of names over time. The sources of the data are unknown and, presumably, biased. It also has one horrible flaw: when multiple names are shown, they are not scaled so that their counts can be compared. Finally, it uses an unnecessary animation to transition between searches - the animation is pretty but adds nothing to the presentation of the data. Even with these flaws, however, the NameVoyager is pretty damn cool. It's fun to see how popularity ebbs and flows.


What about the (regular) flu
Friday, February 25, 2005

Now that the election is just a dull ache in half this country's hearts, the flu vaccine hullabaloo has faded. It's still interesting to see how the disease is actually progressing and the CDC makes that easy to do. Here is an animated GIF showing this season's spread (green and yellow are good; blue and red are bad).

I'm no graphics artist (and if you didn't know that before, you do now) but it's clear that the season has hit most of the country pretty hard. The CDC's weekly summary has more details and this graph makes it seem that we still have a ways to go before the achy season is behind us.


I don't see what there is to see in C
Thursday, February 24, 2005

I recently spent a few hours hacking (trying to hack!) gnuchess. What does anyone see in C? There's all this, this, syntax and brackets in odd places and variable declarations and, and, muck. It's horrible. Back in college I wrote C code and actually got paid for it. I liked it. I thought it was cool and that nothing could be more, well, profound than *s++ = *t++. I didn't know any better.

Lisp may have lots of parentheses (though everyone has seen this, right?) but they all go in the expected places; everything is prefix (we'll leave loop out of the loop, so to speak), and it takes a tenth of the space to say as much.
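
For instance (a toy comparison, nothing more):

;; Everything is prefix: arithmetic, comparison, even assignment share
;; the same (operator arguments...) shape.
(+ 1 2 3)   ; => 6
(< 1 2 3)   ; => T (chained comparison comes for free)

;; And the whole *s++ = *t++ copying loop, said in a tenth of the space:
(let ((src "hello")
      (dst (make-string 5)))
  (replace dst src))  ; => "hello"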


Personal certificates for Apple Mail
Thursday, February 24, 2005

There are probably "better" ways of doing this for those in the know but Joar Wingfors has a very nice set of instructions for creating and installing a personal security certificate for Apple Mail.


Biologists do have a sense of humor
Thursday, February 24, 2005

I've already forgotten where I came across these links but the NYT Week in Review has a wonderful article on the naming of insects:

Scientists may be serious people, engaged in the pursuit of objective truth. But when it comes to naming species, they often let their hair down.

If you're really interested, you can follow up at Curiosities of Biological Nomenclature. Adonnadonna primadonna, anyone?


Brrr
Monday, February 21, 2005

It may be chilly in New England but the latest study from the Scripps Institution of Oceanography in California should finally close the door on those who say global warming needs more study.

A leading US team of climate researchers on Friday released "the most compelling evidence yet" that human activities are responsible for global warming. They said their analysis should "wipe out" claims by skeptics that recent warming is due to non-human factors such as natural fluctuations in climate or variations in solar or volcanic activity.

But it won't.


Quick Review: iSnip
Friday, February 18, 2005

I've tried quite a few OS X clipboard replacements but none have quite felt right. Austin Sarner and Dave DeLong's iSnip may have changed this. It records the last 20 cuts or copies into a readily accessible history and lets you categorize your clippings into as many additional folders as necessary. It looks good, is easy to use and feels solid. Recommended.


Nice write up on SSH tunneling / port forwarding
Thursday, February 17, 2005

This is a nice how-to article on securing your e-mail under OS X with SSH tunneling and port forwarding.


Climate change on the grid
Monday, February 14, 2005

Ars Technica has a brief interview with Dr. David Stainforth from the University of Oxford, the lead author of the recent study from climateprediction.net. It's interesting that the study gains some publicity just because it used distributed computing. I suspect that that won't matter much a few years from now. Of course, it's likely that most everyone is going to ignore this study just like they have ignored all the rest. In that case, we may all be too busy fighting to survive to care about distributed computing one way or the other.


I love benchmarking
Friday, February 11, 2005

Andreas Fuchs has some very nice SBCL bits and pieces over at boinkor.net. The one I like most is his autobench-generated benchmark graphs.


Will Wright debates (talks with) Jaron Lanier
Tuesday, February 8, 2005

In this podcast from IT Conversations, game designer Will Wright and Virtual Reality inventor Jaron Lanier talk about whether computers are augmenting our human capacities or letting them atrophy. The discussion is styled as a debate but the two seem to be more in agreement than in contention. I've only heard the first half so far but the signal to noise ratio seems on the low side. I find some of their comments about education to be insightful but most of them are on the naive side (I ask myself, does this just mean that I don't agree with them?!). They are certainly correct to say that education and play are too often divorced in our school systems but that does not mean that all education should be fun. Some stuff is damn hard to learn; it takes motivation, effort and intellectual work. On the plus side, Lanier does mention that even the most complex of digital worlds is much simpler than real, messy, reality. Kids (and adults) can do incredible things on their computer but still be incompetent dolts when faced with the real.


Quick Review: Disk Inventory X
Tuesday, February 8, 2005

Disk Inventory X is a nifty application styled along the same lines as KDirStat and WinDirStat. It analyses a disk drive and displays the file size results as a treemap.

It's not feature rich but it does what it does pretty well. Worth keeping an eye on.


Why does Windows suck
Sunday, February 6, 2005

Ars Technica provides a sociological analysis of Mark Morford's San Francisco Gate query.

Straight up, I've got an honest answer for Morford. People simply react in different ways to the technology's failings because they're ambivalent about its use. A few people get royally fed up, and move to another platform. Among enthusiasts, it's often Linux, but sometimes MacOS. In my own experience consulting, the less technical tend to bail from Windows to the Mac, but that doesn't happen all that often. Some people decide to get their Windows learn on, and make sure they steer clear of problems. But any way you slice it, the majority of the desktop using populace stays on Windows.


Jonathan Edwards and subtext
Sunday, February 6, 2005

Subtext is an experimental, non-strict, lazy, functional, prototype based computer language (did you catch all those adjectives!). Jon Udell mentioned it a little bit ago and I just watched the presentation. I find it both interesting and wrongheaded. Copy and paste is one of the ways we create things and for that and other reasons, there is much to be said for prototypes. But copy and paste is not the only way we create and it seems that subtext is too limiting. I want to be able to do what subtext does and type. Still, Edwards has his heart and mind in the right place: programming is too hard and we need to do something about it!


Donald Norman
Thursday, February 3, 2005

Design, interface, and affordance guru Donald Norman speaks about emotional design: putting our emotions into what we create and understanding how these creations can benefit from having their own emotional life. He has great examples from a panoply of consumer products (from orange juicers to coffee makers to alarm clocks). Norman's best point is that we should aim for designs that evoke strong emotions (love and hatred); if no one hates our work, then no one will love it. It may be good but it will never be great.


Normal Accidents
Wednesday, February 2, 2005

Charles Perrow's 1984 book Normal Accidents is a tour de force of systems analysis. Read it and learn about all the thousands of ways things can go wrong.


Looks fun: iPresent it
Wednesday, February 2, 2005

I don't have an iPod Photo, but if I did, this software seems like it would be handy. It's a simple idea, but a clever one: use the iPod photo to give personal presentations!


Reading, listening...
Wednesday, February 2, 2005

I've been doing a lot of reading and listening to IT Conversations podcasts lately but haven't found the time to write about much. I hope to catch up soon.


Barry Schwartz on More is less
Saturday, January 29, 2005

Barry Schwartz is a psychologist at Swarthmore College in Pennsylvania. In this podcast from IT Conversations, he talks about the downsides of choice.

"We can't have it all, and worse yet the desire to have it all and the illusion that we can is one of the principal sources of torture of modern affluent free and autonomous thinkers."

He makes the following points:

  • It is impossible to have your expectations exceeded in a world of infinite choice
  • More choices means more regrets and, therefore, more paralysis
  • When choices are few, you can blame the world for not providing any good ones; when choices are infinite, you can only blame yourself when something turns out wrong.

Not to be too political (that's for Polliblog), but it seems like our government is still under the illusion that more choice is always better (private accounts, school choice, etc.). I agree with Schwartz: we shouldn't have to think about which medicine is best for us or which school is best for our kids or which accounts to invest in for Social Security. It would be far, far better to have the system set up so that we can expect the given -- be it doctor, medicine, school, or investment option -- to be good.


Nice interview with Richard Gabriel
Friday, January 28, 2005

Richard Gabriel is an interesting man: software developer, Lisper, co-founder of Lucid, poet and a Sun engineer. He has a nice interview over at java.sun.com.


I really like Ars Technica
Friday, January 28, 2005

The articles in Ars Technica are deep, technically compelling and astonishingly informative. I've always been much more of a software guy than a hardware guy but I like to have some idea of what's happening inside the box. Ars Technica's interviews, reviews and commentary help me keep up to date when I want to. All this, and the writing is good too!

What ends up happening is that the spring stretches and compresses throughout the steady succession of process shrinks and machine generations in a kind of rhythmical motion that's out of phase with the drumbeat of Moore's Curves. It's this stretching and compressing, where storage moves relatively further away from and closer to the ALUs while the overall structure of the machine stays fixed and all of the numbers involved scale downwards at different rates, that drives the cyclical design phenomenon that I mentioned above. Speaking very abstractly, different relative storage-to-ALU distances make for different kinds of architectural problem-solution pairs.

That bit about the steady succession of shrinking and the rhythmical drumbeat of Moore's Curves is really nice.


Frans de Waal goes ape
Thursday, January 27, 2005

Sorry for the horrible title but Frans de Waal is a comparative primatologist who presents an intriguing and amusing talk over at IT Conversations. What's most enjoyable is de Waal's focus on human biology and psychology as drivers of our culture and civilization; we are not nearly so technological as we'd like to think (I should know, I've had years of therapy and seem nowhere near exorcising some of the demons in my past!). Especially interesting is his contention that aggression plays a positive - indeed vital - role in group cohesion.


Eckart Wintzen and an immaterial economy
Thursday, January 27, 2005

Dutch Environmentalist and businessman Eckart Wintzen recommends moving towards an immaterial economy based more on human needs and less on the accumulation of stuff. He makes a pretty compelling case in this interview with Moira Gunn but changes like this seem to happen at far too glacial a pace. On the other hand, a journey of a thousand miles, ... takes a long time.


Quick Review: Amseq screen saver
Sunday, January 23, 2005

Amseq is the Animated Mandelbrot Sequence Generator.

(Note that it looks much better on a dark background.) It's very cool. Recommended.


Design Matters
Saturday, January 22, 2005

Great column from Communication Arts about design. Design matters in everything from Presidential Daily briefings to consumer product labeling.

What did the President know and when did he know it? In April 2004, the White House declassified one of the President's daily intelligence briefs issued just a month before September 11, 2001. The brief specifically states that Al-Qaeda and Bin Laden were planning attacks on the United States with hijacked airplanes.

Graphic designer Greg Storey was horrified. Not just because the information was all right there, but because of the design. It's no wonder the information could be ignored. The document is an uninflected, grey mash of sans serif type. Might thousands have been saved if the information design had been better?

Between this and the butterfly ballot, we can wonder what might have been... But there are also cases where good design has helped:

The Guide shows the estimated yearly operating cost and energy consumption on a scale from least to most efficient. Consumers actually used it to consider not just purchase price, but cost over the life of the appliance. The success of the label convinced government regulators that you could modify consumer behavior through clear, friendly information design, gently pushing them towards more environmentally friendly, if slightly more expensive, purchases. Multiplied by millions of refrigerators, the energy savings have been enormous.

I think that because good interfaces are easy to use and good designs are easy to understand, there is a strong tendency to think that they are equally easy to create. Any of us who have tried should know better, but it can be hard to see mistakes when they occur so close to home.


The functional guts of the Kleisli query system
Friday, January 21, 2005

The functional guts of the Kleisli query system (you can get the paper from ACM or here) is a wonderfully intriguing paper. It's not particularly technical in itself though it relies on a fair amount of deep stuff from both database theory and functional programming theory. The paper describes how the Kleisli system integrates multiple relational database systems (from big iron ones like Oracle and Sybase down to a wide variety of very customized systems built helter-skelter during the early days of the bioinformatics boom), which are distributed, evolving, high-volume and very heterogeneous. The example queries are easy to understand and it's clear that Kleisli is doing remarkable things under the hood. The system is built on top of Standard ML and its use of functional forms for optimization and of set comprehensions for queries is particularly elegant.
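
To give a rough flavor of what a comprehension-style query looks like -- in Lisp rather than Kleisli's actual CPL, and with field names I've made up for illustration:

;; "Names of employees in department 42 earning over 50000" as a
;; comprehension-ish query over an in-memory table of plists.
(defun high-earners (employees)
  (loop for emp in employees
        when (and (eql (getf emp :dept) 42)
                  (> (getf emp :salary) 50000))
          collect (getf emp :name)))

;; (high-earners '((:name "Ada" :dept 42 :salary 60000)
;;                 (:name "Bob" :dept 7  :salary 80000)))
;; => ("Ada")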

As I said, this work is far outside my ken; I came across it via a lucky find on Lambda, the Ultimate. I'm glad I did.

(my apologies, by the way, for the messed up links to the non-ACM version of the paper and my thanks to those who found the link in the first place and helped me fix it in the second!)


Check out some great new quotes
Wednesday, January 19, 2005

If you like that sort of thing...


Little tiny baby Borg
Wednesday, January 19, 2005

An article in Reuters noted by American Scientist:

New Experiments Unite Chips and Cells

UCLA researchers announced a new method to induce living cells to work as tiny robots aboard a microscopic silicon chip. In an article published on Sunday in the journal Nature Materials, Jianzhong Xi, Jacob Schmidt and Carlo Montemagno described a new technique for attaching living cells to chips. In one experiment they used a rat heart cell to power a device that moved on its own; a second device moved like a primitive pair of frog legs. The researchers said their technique could one day lead to self-assembling machines.

Very cool.


Still failing after all these years
Tuesday, January 18, 2005

From the Washington Post (see also here and here):

The FBI said yesterday that a nearly $170 million computer system [known as Virtual Case File] intended to help agents share data about terrorist threats and other criminal cases is seriously deficient and will be largely abandoned before it is launched.

...

the FBI has concluded that the system, the latest version of which was provided by Science Applications International Corp. of San Diego last month, is already outdated.

Maybe we'll have a chance to learn something this time... but I doubt it. Of course, the real questions to my mind are "Would doing it in Lisp have helped?", "Would doing this in the culture that Lisp promotes have helped?" and so on.


An Unquiet Mind : A Memoir of Moods and Madness
Saturday, January 15, 2005

Psychiatrist, author, teacher, MacArthur Fellow and much more, Kay Jamison describes the life of the manic-depressive in writing that is lyrical, deeply moving and often profound. Jamison suffers from manic-depressive or bipolar disorder and her careful chronicle of the joyful passions, bewildering madness and numbing crashes of the disease is both personal and clinical.

There is a particular kind of pain, elation, loneliness, and terror involved in this kind of madness. When you're high it's tremendous. The ideas and feelings are fast and frequent like shooting stars, and you follow them until you find better and brighter ones... But, somewhere, this changes. The fast ideas are far too fast, and there are far too many; overwhelming confusion replaces clarity. Memory goes. Humor and absorption on friends' faces are replaced by fear and concern. Everything previously moving with the grain is now against - you are irritable, angry, frightened, uncontrollable, and enmeshed totally in the blackest caves of the mind. You never knew those caves were there. It will never end, for madness carves its own reality.

She has "... become fundamentally and deeply skeptical that anyone who does not have this illness can truly understand it" but her books provide at least partial pictures for those who can make the effort.

Mental illness remains a pervasive stigma in our society. Those who suffer from it are a legion of the unannounced and, far too often, the unhelped. Books like this need to be read by more people so that we can continue to expand our compassion for those different from ourselves (be it race, or gender, or nationality, or religion, or brain dysfunction).


The wheels of science grind exceedingly fine
Thursday, January 13, 2005

But it's nice to know that eating less and exercising more really will help you lose weight.

On the other hand, I hope that Ann M. Veneman isn't serious when she says:

Added Ann M. Veneman, secretary of the U.S. Department of Agriculture (USDA): "The new guidelines have additional science incorporated, but many of the recommendations are not significantly different than what's been recommended in the past. This was the first time we used an evidence-based approach to reviewing research."

What have they been using, their gut (no pun intended)?


What is it like to be a machine
Wednesday, January 12, 2005

Another item from the way back machine...

The definition of machine

Combination of mechanical or electrical parts

1. a. A device consisting of fixed and moving parts that modifies mechanical energy and transmits it in a more useful form.

b. A simple device, such as a lever, a pulley, or an inclined plane, that alters the magnitude or direction, or both, of an applied force; a simple machine.

2. A system or device for doing work, as an automobile or a jackhammer, together with its power source and auxiliary equipment.

3. A system or device, such as a computer, that performs or assists in the performance of a human task: The machine is down.

and so on

Is an animal a machine? Is a plant? Is a person? Not by the above definitions except metaphorically.

What does it mean to be determined? To have free will? Do we want to say that Free Will is an illusion or that it has no meaning (that the concept treads on unsteady ground and, freed from the connections that spawned it, it would be best if it floated away from earth and never returned?)?

A system is deterministic if the same initial conditions result in the same outcome. I.e., if a certain state always results in some other certain future state. This seems reasonable but completely impractical. How can I know that some system is in a state (how can I set the limits of the system)?

To say that an outcome is determined means that I will be surprised if something else happens. The ball rolls down the plane. But (as Wittgenstein pointed out) machines can break. The ball might fall off (and I wouldn't be surprised) or it might roll up hill (and I'd be very surprised). What's the difference?

If a system becomes so complex that I cannot know the state that it is in, does the system have free will? Is it no longer deterministic? To say that "If I knew the initial conditions, then..." is nonsense if I can never know the conditions (and know them precisely enough (deterministic chaos)).

If a child says that "he could jump to the moon if his legs were strong enough", we smile. Is this the same as "if we knew the initial conditions with enough accuracy, we could predict..."?

I especially like the last paragraph.


Old notes on Time and Scale
Wednesday, January 12, 2005

Sometimes it's fun to go into the way back machine and see what the younger you was thinking. It's also sometimes embarrassing but I won't go there now! Here are some notes from October, 1999.

Where does time come from? Things happen but that's not the kind of time I care about. Agents perceive (directly or not at all) things happening. They notice (or not) the changes in their internal resources (how do they know that they are their resources and not just external changes?). Things that are relevant happen at some rate(s) of change. If an agent can suspend itself (without loss), then time has a very different meaning.

We say "that took a long time" but that's only in relation to us. We all know that time, in this sense, is relative to US but I'm not sure if we attend to the fact.

Think of the tick or the dung beetle or the sphex wasp... they repeat their actions (indefinitely... until they die?). They cannot "step outside" themselves. Time has no meaning to them?

When does time consciousness enter the system?


CREF: An Editing Facility for Managing Structured Text
Tuesday, January 11, 2005

We go back into the archives to find a paper documenting Kent Pitman's 1985 work on a fancy cross referencing editor designed to be used as part of a knowledge acquisition toolkit. This is a nice paper. Pitman describes the problems to be solved, how CREF solves them, what CREF fails to solve, and how other ideas might help solve them differently in the future.

CREF's main idea is to break text into segments which can be annotated with keywords (shades of del.icio.us, see here too) and connected with typed links (e.g., supersedes, summarizes, precedes, and so forth). Collections of segments can be named and manipulated and the same segment can appear in multiple collections. There is also some diagrammatic reasoning support and various other editing features.
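
If I read Pitman right, the core data model amounts to something like this sketch (the structure and function names are my own, not CREF's):

(defstruct segment
  text       ; the segment's contents
  keywords   ; e.g. (:parsing :todo)
  links)     ; alist of (link-type . target-segment)

(defun link-segments (from to type)
  "Add a typed link -- :supersedes, :summarizes, :precedes, etc."
  (push (cons type to) (segment-links from)))

(defun segments-with-keyword (keyword segments)
  "Collections are just lists; the same segment can appear in many."
  (remove-if-not (lambda (seg) (member keyword (segment-keywords seg)))
                 segments))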

The great thing about this is that a project's structure can evolve naturally and organically without having to worry about file systems, versioning, etc. I've been ruminating about coding in an environment that gave me multiple views of my code (e.g., all the methods for this class here and all the methods for that generic function there and all the versions of that function over there) without making me worry about files and organization and all of that. The organization should be dictated by the project, not by the technology; the interface should be dictated by the task, not by the computer.

It's encouraging to read Pitman's report because it shows that many of these ideas can be implemented and made to function. It's also depressing because he wrote this 20 years ago and we're almost all still using glorified versions of vi and EMACS.


IT Conversations
Sunday, January 9, 2005

It seems as if there's something new every day; usually many things! Today's find (courtesy of Jon Udell) is IT Conversations:

Listener-supported audio programs, interviews and important events.

There are loads of interesting interviews and coverage of exciting events. This will help make my iPod even more fun.


Ping Pong
Sunday, January 9, 2005

Brian Mastenbrook responded to my brief comments about Orcinus. Here is my response... How nice, Dialog!


Quick Review: Longhand
Sunday, January 9, 2005

It's great when people have ideas that are wonderful and simple. Scott Fortmann-Roe has given us Longhand, a new OS X calculator

built from the ground up to facilitate calculation. Most other computer calculators try blindly to emulate the physical format of their predecessors. What works well in the real world, however, functions worst, and is often not desired, in its virtual sibling. By leveraging the capabilities provided by modern technology, Longhand allows you to perform everything from the most basic to the most complex of calculations with great ease.

That's smart. I've only played with it for a few minutes and can tell it will take a bit of getting used to / customizing / reading the manual. That said, I can imagine this being so much more efficient and useful than other physical-calculator-based calculators. Way cool.


Thoughts on Folksonomy
Friday, January 7, 2005

Most readers of this weblog have probably already seen del.icio.us, the social bookmark website. It lets people share and categorize bookmarks collaboratively. The categorization exists in what has come to be called a Folksonomy (which is a nice neologism). This is good but, as the Wikipedia entry on social bookmarking says:

Drawbacks of current implementations include: single word categories, no mechanism to define or refine categories, no synonym/antonym control or related terms & no hierarchy.

Some see this as a benefit:

If I had to sum up the Web's effects on the world, I'd say "surprised by simplicity." Unlike most other technologies, we're witnessing a shift to simpler apps over time, as with the way million dollar CMS systems and collaboration via Lotus Notes shifts to weblogs and wikis. del.icio.us hits that same pattern - not a single wasted feature, it just works the way the Web does.

Wikis are nice, e-mail is nice, wood stoves are nice too. Nonetheless I get tired of chopping wood sometimes and of managing SPAM and of having to edit multiple pages to manage to-do lists and categorization. Web applications may not have wasted features (see also this paean for general simplicity) but they often seem to waste my time and mental effort.

The way I see it (from a distance, through a glass, darkly), the semantic web provides mechanisms for structuring the web but does not do the structuring itself. That requires the addition of categorization and structure (C&S). A few years ago, categorization was done entirely by humans, was private and only occasionally persistent (i.e., bookmark lists). Today, Google does a sort of on-the-fly categorization, Clusty does real clustering, and Wikis and del.icio.us provide persistent, human-created structure. Life is better.

There are, however, at least two opportunities here:

1. Applying topic tracking, categorization, clustering and other AI techniques to the creation and application of C&S

2. Extending the sorts of things one can say in C&S. For example, adding mechanisms to "define and refine categories, synonym/antonym control, related terms and hierarchy." These are all the sorts of things that "real" ontologies / taxonomies should have (see the sketch below).

Doing this right would be good.
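
To make the second opportunity concrete, here's a toy sketch of a synonym table layered on top of plain folksonomy tags (all of the names are hypothetical):

(defvar *bookmarks* (make-hash-table :test 'equal))  ; tag -> list of urls
(defvar *synonyms*  (make-hash-table :test 'equal))  ; tag -> canonical tag

(defun canonical-tag (tag)
  (or (gethash tag *synonyms*) tag))

(defun tag-bookmark (url tag)
  (push url (gethash (canonical-tag tag) *bookmarks*)))

(defun bookmarks-for (tag)
  (gethash (canonical-tag tag) *bookmarks*))

;; (setf (gethash "weblog" *synonyms*) "blog")
;; (tag-bookmark "http://example.com/" "weblog")
;; (bookmarks-for "blog")  ; => ("http://example.com/")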


Orcinus
Thursday, January 6, 2005

I think that you should know about Orcinus if you don't already. It's well written and thoughtful, and the implications of David Neiwert's opinions are chilling. If I stand anywhere on the political spectrum, it's on the left. I stand there because I think that the left is interested in personal freedom, dignity, and the rights of the individual. I stand there because I think that the right is, by and large, interested in itself. I stand there because I fear the hate I see engulfing this country as the politics of scarcity become a dominant theme. Some of this hate comes from the left, but most of what I see and certainly the most galling examples (e.g., Coulter or Limbaugh) come from the right. Neiwert argues all too convincingly that America is in danger of seeing the rise of another form of fascism.


A Visual Representation for Knowledge Structures
Wednesday, January 5, 2005

Michael Travers (who wrote LiveWorld and has done interesting work applying AI to the search for pharmaceuticals) presents a knowledge representation interface designed to make understanding Cyc easier and using it more efficient. Cyc is a great big knowledge base (i.e., a database of facts) coupled with a slew of inference engines, etc. It is one of the older examples of Good Old Fashioned AI (GOFAI) and is the subject of both adulation and derision. Putting aside for the moment whether Cyc will ever "go meta", start reading newspapers and rename itself SkyNet, there is no doubt that a lot of human knowledge is formalized (for what it's worth) and contained in the Cyc Knowledge Base (KB). The knowledge is represented in a sort of Lisp-like language (CycL) along with lots of documentation (English text). At the time of Travers's work, the main way to interact with Cyc was via the command line and web-browser-like tools. They were pretty bad.

Travers's Museum Unit Editor (MUE) was designed to force Cyc into a spatial metaphor (along the lines of Christopher Alexander's 1964 Notes on the Synthesis of Form). Much of this was abandoned because, frankly, Cyc doesn't fit into a spatial metaphor very well. However, the basic idea of seeing facts as rooms within rooms (containing facts), containing rooms (sub-facts) and objects (examples), and with gateways to still other rooms (related facts) is powerful enough to represent Cyc with a structure more amenable to the human mind (at least that's the claim; there aren't any real experiments described in the work... it does, however, seem plausible that such an interface would be cool, useful and fun to be in). MUE also used color, allowed objects (facts) to be in multiple places at once (that's just the way knowledge is, dammit!), and provided nice re-rooting operators to move from one "place" to another.
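
My reading of the rooms metaphor as a data shape (a sketch only; the names are mine, not Travers's):

(defstruct mue-room
  fact        ; the fact this room represents
  sub-rooms   ; rooms within rooms (sub-facts)
  objects     ; example objects on display
  gateways)   ; doorways to related rooms elsewhere

(defun re-root (room n)
  "Step through the Nth gateway so a related room becomes the new 'place'."
  (nth n (mue-room-gateways room)))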

MUE was also used to browse other graphical structures like e-mail, text and program structure. It is part of a long line of similar work involved with finding useful representations of non-physical things. The big question, I think, is why so much of our time is still spent dealing with text. This paper is from 1989 -- 16 years ago! Why isn't it easier to graph stuff I care about (e.g., program source code, bibliographies of papers, pictures of my cats,...) and view / interact with it using tools beyond hierarchical file browsers and text editors? Something here is hard. What is it?

My answer is that there are two hard parts:

  • we still don't really know what the good ways of interacting with non-physical and semi-physical things are, and
  • it's hard to describe the things we care about to a computer easily

The first problem has seen lots of work and there are lots of techniques. Few techniques have, however, been seen as useful enough to make the leap from the lab to the masses. Furthermore, there is not (as far as I know and, hey, what do I know?) any body of knowledge that says which techniques are best used in which situations and why.

The second problem in its full generality is equivalent to understanding natural language. On the other hand, it is also trivially about lots and lots of parsers that give their best shot to things like "all the headings in chapter three of my books" and "the sub-folders of 'People' are in the format 'last-name, first-name'." This seems related to my personal take on the Feyerabend project -- having many solutions, always active, always computing, and always competing. Yes, many will fail, and crash, and cause errors but somehow enough will succeed to get a good answer.


Quick Review: Amazing Slow Downer
Monday, January 3, 2005

The oddly named Amazing Slow Downer lets you slow down (or speed up) audio tracks under OS X or Windows. I've only used it under OS X. The software works as advertised. It has a clunky, very non-OS X interface and costs a whopping $44.95. That's a lot of money for what I imagine is some pretty simple signal processing. On the other hand, it's the only program I've found so far (after, admittedly, only about 17 seconds of heavy searching) that does this and it could save you lots of time listening to those books on tape. I'd give it an A for effort, a D for price, an F for interface and an overall grade of B-.


It isn't just for music anymore...
Thursday, December 30, 2004

This isn't what I think of when I hear "appropriate technology", but it really is as good an example as any.

iPod Helps Radiologists Manage Medical Images

The iPod is not just for music any more. Radiologists from the University of California, Los Angeles (UCLA), and their colleagues at other institutions from as far away as Europe and Australia are now using iPod devices to store medical images.

I think it is wonderful when new uses are found for existing technologies.


On the Robustness of Centrality Measures under Conditions of Imperfect Data
Thursday, December 23, 2004

Everyone is familiar with how polling (or sampling) can be used to estimate values for entire populations while examining only a tiny fraction of their members. This works very well (2000 and 2004 elections notwithstanding) when the values in question are independent of the relationships among the members of the population. That is, we can estimate the average height of all 26-year-olds by measuring only a few of them because the height of each 26-year-old is assumed to be independent of all the others. The problem is that when the values we care about are relationships, sampling is no longer so easy. We can draw valid conclusions from samples of a population if and only if we have independence between each sample member but when we have relationships (graphs), independence is out the door (if for no other reason than that most real-world graphs have friend-of-a-friend structure: it's more likely that my friends will know each other than not).

To pursue this, Borgatti, Carley and Krackhardt investigate one typical graph measure under varying noise conditions to see how the noise alters the sampling results. This is definitely useful and important research - not to mention a paper that practically writes itself! - and the authors do a good job explaining their methodology and results. The only serious limitation of their work is that they study only Erdos / Renyi random graphs (see here for a summary of Mark Newman's excellent SIAM survey on graph theory) and only random noise. As they themselves conclude:

A crucial limitation of this study is that we have studied only random error on random networks. This is appropriate as a first step in understanding how measurement error affects the calculation of network indices, but it should be clear the results could be quite different for practical settings in which (a) the data collection methodology makes systematic errors (such as more readily losing nodes with low degree), and (b) the networks themselves are not randomly constructed (as we expect for most human networks).

Erdos / Renyi random graphs are easy to analyze but by now everyone knows (yes, even your two-year-old) that they are very poor models for real networks - the ones we care about. If I have time, I'd like to follow up on their results.
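
To make the setup concrete, here's a toy version of the experiment -- generate an Erdos / Renyi graph, drop a fraction of its edges at random, and compare degree centrality before and after. Purely illustrative, not the authors' actual methodology:

(defun random-graph (n p)
  "Erdos / Renyi: each possible edge is present with probability P."
  (let ((adj (make-array n :initial-element '())))
    (loop for i below n
          do (loop for j from (1+ i) below n
                   when (< (random 1.0) p)
                     do (push j (aref adj i))
                        (push i (aref adj j))))
    adj))

(defun degrees (adj)
  "Degree centrality is just each node's edge count."
  (map 'vector #'length adj))

(defun lose-edges (adj fraction)
  "Drop each edge with probability FRACTION, simulating imperfect data."
  (let ((new (map 'vector #'copy-list adj)))
    (loop for i below (length new)
          do (loop for j in (aref new i)
                   when (and (< i j) (< (random 1.0) fraction))
                     do (setf (aref new i) (remove j (aref new i))
                              (aref new j) (remove i (aref new j)))))
    new))

;; (degrees (random-graph 10 0.3))
;; (degrees (lose-edges (random-graph 10 0.3) 0.1))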


Prey
Thursday, December 23, 2004

Prey is a fast-paced romp in a not-too-distant future where advances in nanotechnology, computer technology (especially distributed AI) and microbiology combine to produce autonomous self-organizing particle swarms. Crichton does a nice job mixing science fact and science fiction and the story, while not completely believable, never strays so far from truth that it becomes hard to keep your disbelief suspended. One possible disappointment is that Crichton leaves the ending unclear and unresolved. This is probably for the best, however, as these technologies may be beyond our grasp and our own story - the story of our species on our planet - is equally unclear.


Did you hear about the altered Mousepox virus
Wednesday, December 22, 2004

This is from 2001 (see here and here and here) but I only found out about it from reading the introduction to Michael Crichton's Prey. The incident (and this even more recent one involving TB) shows how correct David Suzuki is when he says:

I am shocked at how little my colleagues in genetics pay attention to history. They actually forget how ignorant we are - that although we have achieved incredible manipulative powers, we know next to nothing about the real world in which those manipulations will reverberate.

Among other things, I'm a programmer. I'm constantly shocked - but no longer surprised - at how often I fail to see the obvious consequences of my design decisions. Thus my concern about GM food. I'm not too worried about what the biologists are trying to do (though there is much room for concern about social and moral implications); I'm concerned about things happening that no one intended. As a species, we're just not that smart.


A Theory of Programming
Wednesday, December 15, 2004

One thing missing when we talk about programming and programming languages is what we mean by the whole affair. What is programming? Who is it for? Who is doing the programming and why? Is it an end or a means? The answers to these questions always exist in the sub-text (cf. Phil Agre's deconstructionist work and also the C2 wiki). They define the audience, the standards, and the conflicts. Are static types a good thing? Should as much as possible be dynamic? Are the best languages formal? The right answer is "well, it depends".

I've taken programming language courses that felt like we were taxonomists of some bizarre family of arthropods. I've read papers that treated programming as the careful deduction of formal properties via mathematical proof. I've used environments that held my hand so tightly, I couldn't even think, let alone accomplish my tasks. My biases are obvious but what is less clear is the rationale behind them. Reading Kay's history helped me expose a little of that and I'm hopeful that further excavation is possible.

(By the way, I'd be happy to have my first paragraph disproved. If you know of good discussions of meta-programming (and I don't mean macros <smile>), please let me know.)


An old review of Philip Agre's Portents of Planning
Wednesday, December 15, 2004

Agre critically reviews Miller, Galanter and Pribram's "Plans and the Structure of Behavior" by deconstructing the first paragraph of their introduction. The central hypothesis of their work is that "behavior has the structure that it does as the result of Plans" (note the capital 'P').

AI introductions typically merge (awkwardly and with distortions) technical and vernacular vocabularies. They "introduce and institutionalize distortions of genre, rhetoric, and logic" which require a "depth of rethinking and redoing" to put right.

P&SB merge 'deciding what to do' and 'describing what is done'. The "subject matter is not activity in the narrative present tense but rather thought about activity in the future tense." Your day has "a structure of its own, independently of you."

The irreconcilability of the formal and figurative modes of language:

Here, "propositions that are contradictory when stated in ordinary language become consistent when converted to the categories of the formal theories the text will later elaborate." "When a way of speaking so readily subsumes its negation, what could possibly falsify it?" "P&SB is a theory which seems somehow both vacuous and universal"!

P&SB divides cognition and activity in a gross Cartesian sense. But "human beings are, by nature and necessity, intimately involved with their surroundings in the physical and social world."

"Setting things straight requires an admission that things are just harder than [blurring the boundaries between representation and reality]. Living in the world requires a dialectical engagement, not just a fantasy merger. Using representations, whether of circumstances or of actions, requires the continual practical work of interpretation, not just the passive appeal to unmeditated correspondences."

(Note that my quotes are from a draft of Agre's paper and should not be used directly).


Seven Years in Tibet
Wednesday, December 15, 2004

This is an astonishing story, simply told. I've not seen the movie (and am not sure that I want to) since the movie, I think, is by nature about Brad Pitt whereas the book is by nature about Tibet and Tibetans.

The glimpse provided into the many subcultures of Tibet is profound: Heinrich Harrer displays the Tibetans' strengths and weaknesses, their simplicity and complexity, their minds and their hearts. To know that the Chinese have wrought such havoc with this people is deeply saddening. (Though to pretend that America or Americans can take any moral high ground in this matter is to be blind to our own native peoples and to the environmental disasters we are brewing for our planet.)

If you want to see Tibet through the eyes of a European as it was 60 years ago, this is probably the place to start. Highly recommended.


Hmmm, priorities, priorities
Wednesday, December 15, 2004

I'm as in favor of helping minorities as the next person -- after all, I program in Common Lisp -- but I also think that some minorities are more equal than others. For example, I'd rather fund basic science, fuel cells, ecological research, and (why not) computer science than the ridiculous boondoggle of a missile defense shield that doesn't even work well:

The first test in almost two years of the planned multi-billion dollar US anti-missile shield has failed.

The Pentagon said an interceptor missile did not take off and was automatically shut down on its launch pad in the central Pacific.

A target missile carrying a mock warhead had been fired 16 minutes earlier from Kodiak Island in Alaska.

This test had been delayed four days because of "bad weather at launch sites and, on Sunday, because a radio transmitter failed." I just hope our putative ballistic enemies are willing to wait until it's sunny out! Seems to me that $10 billion a year could do a lot to leave no child behind, or help feed the poor or, what the heck, rebuild Iraq.


The Early History of Smalltalk
Tuesday, December 14, 2004

This paper of Alan Kay's is amazing! Kay covers the history of Smalltalk the language, of Xerox PARC, and of the many other threads that wove computing into the fabric of our lives. Through it all Kay's chief focus is on education -- on using computers as "thought amplifiers" (Papert).

Perhaps it's partly because we can look at this time period in retrospect, but an amazing amount of seminal work was accomplished in a very short time. Even beyond this, I find Kay's insights into education profound and refreshing:

... Knowledge is in its least interesting state when it is first being learned. The representations -- whether markings, allusions, or physical controls -- get in the way (almost take over as goals) and must be laboriously and painfully interpreted. From here there are several useful paths, two of which are important and intertwined.

The first is fluency, which in part is the process of building mental structures that disappear the interpretations of the representations. The letters and words of a sentence are experienced as meaning rather than as markings, the tennis racquet or keyboard becomes an extension of one's body, and so forth. If carried further one eventually becomes a kind of expert -- but without deep knowledge in other areas, attempts to generalize are usually too crisp and ill formed.

The second path is towards taking the knowledge as a metaphor that can illuminate other areas. But without fluency it is more likely that prior knowledge will hold sway and the metaphors from this side will be fuzzy and misleading.

The "trick", and I think that this is what liberal arts education is supposed to be about, is to get fluent and deep while building relationships with other fluent deep knowledge. Our society has lowered its aims so far that it is happy with "increases in scores" without daring to inquire whether any important threshold has been crossed. Being able to read a warning on a pill bottle or write about a summer vacation is not literacy and our society should not treat it so. Literacy, for example, is being able to fluently read and follow the 50-page argument in Paine's Common Sense and being able (and happy) to fluently write a critique or defense of it it. Another kind of 20th century literacy is being able to hear about a new fatal contagious incurable disease and instantly know that a disastrous exponential relationship holds and early action is of the highest priority. Another kind of literacy would take citizens to their personal computers where they can fluently and without pain build a systems simulation of the disease to use as a comparison against further information.

This is strong stuff and from 1993! We live in a world of diminishing expectations and I don't see that anything much has changed for the better about our models of citizenship, education, computing for all, and so forth. Kay's vision, in other words, has not been realized. As he asks in closing, "Where are the Dans and Adeles of the '80s and '90s [and 00s] that will take us to the next stage [of computing]?"


Google digitizing world literature
Tuesday, December 14, 2004

Cool!

Google, the leading service for finding information on the internet, yesterday set out ambitious plans to become a catalogue and digital library for world literature.


What Brings a World into Being
Monday, December 13, 2004

The eclectic David Berlinski examines the metaphor of information as actor and creator that runs through modern biology (DNA as blueprint), consciousness studies (words as carriers), and cosmology (laws/equations as creators):

A novel brings a world into creation; a complicated molecule an organism. But these are the low taverns of thought. It is only when information is assigned the power to bring something into existence from nothing whatsoever that its essentially magical nature is revealed.

In spite of feeling that something slippery is happening under Berlinski's clear prose, I do tend to favor his skepticism for at least three reasons:

  • What I take to be the essential correctness of Lakoff and Johnson's critique of abstract knowledge and their counterclaims of necessary physical embedding and framing,
  • Susan Oyama's wonderful works (such as Evolution's Eye) which argue strongly against the modern view that DNA is a "code" containing "information" which alone describes an organism, and
  • My dissatisfaction with the application of Shannon's content-less theory of information to so many topics. Shannon doesn't talk about information the way we do (and he knew it very well) and when we use the term it's too easy to fall into the metaphorical trap.

I think that what happens when we interact with the world via reading, what happens when a cell metabolizes and divides, and what happens when the universe moves forward in time are all more subtle than we imagine. As Wittgenstein said, we tend to walk down well-trodden pathways of error and it takes hard work to avoid the usual traps. I'm not sure what I think of Berlinski yet but I'm glad I've run into him and that a lot of his essays are available.


Fremder
Wednesday, December 8, 2004

This odd little book strolls through snippets of eclectic music, the Belousov-Zhabotinsky reaction, the sadness at the heart of things, Old Testament prophets, a Clockwork Orange-like dystopia and sex. It's enough to make you think! Russell Hoban's body of work includes science fiction, children's books and opera so it's easy to see how he manages to pull so many disparate sources together. What is hard to understand is how he does it so well! The book is a wonderful read and full of interesting twists and turns of phrase. I'm not sure if it has a deeper meaning, but I'd recommend it regardless.


Weaving together a few threads
Wednesday, December 1, 2004

I've been mulling over several threads for a while now. No productive thoughts but lots of smoke. Just lately, I've come across several links that show others are heading down similar paths. For what it's worth, here are some of them:

  • Oliver Steele compares and contrasts language mavens with tool/IDE mavens
  • Sergey Dmitriev ponders Language Oriented Programming. A quick read of the first several pages makes me wonder if he's familiar with Lisp?!
  • Jon Udell wonders when we're going to start writing applications that pay attention to what we're doing

When I organize my e-mail or conduct research on the Web, I exhibit predictable patterns of behavior. We have long expected but rarely experienced personal productivity software that absorbs those patterns, automates repetitive chores, and can be taught to improve its performance.

I'm glad people are starting to think about these issues. I believe that there is a big research vein to tap combining non-trivial Artificial Intelligence with programming IDEs. If we can make programmers more productive, then we can hope to have better tools for everyone.


Blue at the Mizzen
Sunday, November 28, 2004

If it's starting to seem like unCLog is a book review site instead of a Common Lisp site, I apologize! I've been too busy at work to push on any of my own projects or, for that matter, to think coherently!

In this, I envy Aubrey and Maturin their long sea voyages and I envy the Roke wizards in the Immanent Grove even more. We live in a culture of information glut where reflection is becoming less and less something we do and more and more just some language feature we want -- that's stretching it a bit, but ...

In any case, I recently finished Patrick O'Brian's final chapter in the Aubrey / Maturin saga. (There is a 21st book but it's only the first three chapters and some notes and seems published more to milk the cash cow than further the art... Of course, I haven't read it so I shouldn't prejudge.) Blue at the Mizzen is a slightly disjointed but rousing tale of international intrigue, romance and adventure. Best of all, Jack Aubrey finally gets his promotion to Admiral. In the spirit of "always leave them wanting more", the book is a fitting end to Aubrey's adventures and O'Brian's joyful chronicling of them.


Legends, volumes 2 and 3
Tuesday, November 23, 2004

I found volumes 2 and 3 of the Legends series at my local library and thought I'd give them a listen. Legends is a four volume series of novellas edited by Robert Silverberg that brings together many of science fiction and fantasy's luminaries. Though some of the stories were weak, it's an excellent set overall. Here are my quick takes. First, volume 2:

  • Robert Jordan's New Spring: A gripping prelude to his Wheel of Time series. This was my first experience of Jordan and hearing it made me run to the local library to find book one the next day.
  • Terry Pratchett's The Sea and Little Fishes: An amusing Discworld piece told with vim and vigor. I like Pratchett. He's funny, clever and very readable.
  • Orson Scott Card's The Grinning Man: Another amusing and enjoyable tale. I got tired of Card a long time ago after the promise of Ender's Game fell flat on weak characters. This one was interesting enough to make me think about giving him another look.

Now Volume 3:

  • Terry Goodkind's A Debt of Bones: I found this very weak with cloying characterization, predictable plot and disastrous dialog. On the other hand, the descriptive style is excellent and the ideas were interesting. Maybe his longer works are better but this piece didn't make me want to find out.
  • Ursula K. Le Guin's Dragonfly: This tells the story between The Farthest Shore and The Other Wind. I didn't like The Other Wind when I first read it -- it differed too much from the original trilogy and Earthsea had seared my heart when I was but a lad. Dragonfly fills in many missing details and places The Other Wind on firmer footing. It was a wonderful story. In fact, I enjoyed it so much that when it finished, I turned the tape player off and never heard what Tad Williams had to say!


Quick update
Monday, November 22, 2004

Been reading lots of books and finished several over the weekend. I'll post my mini-reviews as soon as I have a chance to think about them for more than a minute.

The wonderful Mac OS X collaborative editing tool SubEthaEdit has just released an update to version 2.1.

In case you've been living under a rock, Paul Graham has a new essay out on American quality - the good and the ugly.

I've just gotten out from under a very large rock at work and have hopes that I might be able to push a bit on the half dozen Lisp projects I've been mulling over for the last six months. I hope so. I don't like the feeling of stillborn ideas.


Friday Check in
Friday, November 19, 2004

I've been away most of the week at a very wacky, very disorganized work-related meeting. It was dispiriting but at least it's over. One nice thing about working meetings is that I always seem to have lots of little ideas during them. I just need to write them all down and see if any of them really make sense!

I've also fixed the main unCLog glitch. As you can see, there are a few remaining problems but nothing too egregious. Have a great weekend.


More weblog glithces [sic]
Monday, November 15, 2004

No, I don't know what happened to my sidebar. That's one of the horrible things about computers -- it only takes one bit to turn the good into the awful. I'll get it fixed soon. Promise.


Announcing PolliBlog
Monday, November 15, 2004

I've always felt uncomfortable airing my political views on a weblog ostensibly devoted to Common Lisp and computer science. The obvious solution of creating another weblog finally occurred to me. Thus is born Polliblog, a weblog devoted to politics, transformation and my penchant for neologistic puns. Come in if you want, stay out if it pleases you. It won't be Daily Kos or Eschaton but it will be me.


Uh, hello, it is 2004 isn't it?
Thursday, November 11, 2004

You would think that "we" would have some ideas of how to design and build a mission critical system by now where "we" means the general computer scientist / computer industry person. But this NYT story makes it clear that too many obvious principles haven't leaked out into the world the way they should have.

  • Involve users in the process?
  • Make simple things simple?
  • Test first?

Nah, let's use MS Windows and slap together a bunch of junk and then foist it off on people who can't do their jobs because the software is so piss poor. Yeah, that's the ticket. Sigh.


Epidemiology and Data Mining
Tuesday, November 9, 2004

From American Scientist:

Specter of Avian Flu's Spread Turns CDC Scientists Into Detectives

And so their sleuth work extends from monitoring e-mail and Web sites for pertinent postings that could portend the first wave of a coming pandemic to trying to get invited over to Asia so that they can interview those closest to the avian flu's victims. That kind of access lets Fukuda and Uyeki track the last interactions of people before they succumb to infection, without having to tangle with need-to-know disclosure rules that often vary radically from nation to nation.

I heard a talk not very long ago (but long enough that I can't remember the name of the person who gave it... sigh) by a medical doctor who described the sorts of things that can be done with data (barring potential privacy concerns) if it were collected, along with the challenges in collecting and analyzing it. Good stuff.


Speaking of maps
Tuesday, November 9, 2004

Michael Hannemann pointed out a nice set of maps with associated discussion. There's also this nice distortion map showing red and blue weighted by population density (which explains a heck of a lot about who voted for whom).

Update: thanks to those who pointed the way to this link. (There's that Mark Newman fellow again! Does he ever sleep?). In case you don't sleep, there are even more maps here.


Maps, maps, maps
Friday, November 5, 2004

People have probably already seen this map from USA Today. It's interesting how things fall apart (which is also the title of a good novel by Chinua Achebe and a reference to Yeats's wonderful poem). There is surely a lot of data mashing one can do but we're not going to get around the great divide.


Watching Fog of War
Friday, November 5, 2004

This Errol Morris film (transcript) is hauntingly powerful. Its relevance to today is obvious but to my mind what matters most is listening to an old man tell his story. Yes, McNamara was at the center of many things but we all have stories to tell if we take the time (are given the time) to think. Computers seem to suck time from the air -- thinking time at least. They are faster at many things, yes, but they can easily break time up into unusable chunks. I'm also reading Tyranny of the Moment: Fast and Slow Time in the Information Age by Thomas Hylland Eriksen. I've only just begun but I think he's on the mark.


Vote
Tuesday, November 2, 2004

Regardless of who you support, today is the day to vote. Make your voice heard. It's what democracy is all about.


Statistical Programming with R, part 1
Tuesday, November 2, 2004

This is a nice introduction to R, an open source statistical programming language in the S/S-PLUS family. The authors briefly describe R's history and availability and then demonstrate basic R statistical facilities.

Like Perl, R provides more than one way to do things. Like Scheme or Lisp, R can be used interactively and via scripts and programs. I've heard that R was influenced by Scheme but the examples given don't shed any light on this. R looks nice but I'd rather have something that was fully integrated with my Lisp environment. A long time ago, EKSL was responsible for CLASP, the Common Lisp Analytical Statistical Package. It's been underfunded recently and has accumulated a severe maintenance debt. Still, I'm hopeful that we'll find the time and energy to update it. Before we do, I'll certainly take a closer look at R to see what commonalities and synergy can be found!


Machine learning and politics
Friday, October 29, 2004

Aleks Jakulin has a web site devoted to Data Mining voting roll calls in the US Senate. The site includes links to other researchers in the area. Very nice.


Tune Finder
Wednesday, October 27, 2004

Karelia software has announced Tune Finder, an OS X program that lets you search for music if you know the tune! You type in the notes on a virtual keyboard and it does the search. The library is small at roughly 13,000 tunes but I assume it can and will grow. It's also too bad that you can't whistle or hum your way to the music -- that would be neat. Nonetheless, it's an interesting idea and sounds like fun. The database of music might also be of interest to the machine learning community.


Flexibility and Specificity in Infant Motor Skill Acquisition
Tuesday, October 26, 2004

I recently heard a talk by Karen Adolph, a developmental psychologist at New York University. She talked about how balance is fundamental to learning motor skills in general and to learning how to sit, crawl, creep and walk in particular. She has performed dozens (if not hundreds!) of studies of infants and adults performing these tasks in a variety of situations: moving up and down slopes, across bridges with and without support, wearing weights, changing the texture of the surface or of the footwear. What she has found supports the theory that two very different learning systems are involved in all of this. One takes a long time to train but has good transfer (what is learned in one situation can be adapted quickly to other similar situations). The other can learn very quickly (in one or two trials) but is also very situationally specific.

The first learning system was researched heavily by Harry Harlow back in the 1950s. He called it learning set theory or learning to learn. His research focused on monkeys but similar results have been found with humans as well. The gist of the experimental setup is that the monkey gets two things to choose from, one of which has a reward under it. The things might be two different shapes or two shapes with different colors or whatever. Within each problem, the reward is always under the same shape. The first time the monkey goes through this experiment, it takes hundreds or even thousands (!) of trials before it learns that the reward is always under the circle (for example). It also takes a very long time for the monkey to learn the correct shape on the next problem, and the next. Eventually, however, something wonderful happens: the monkey understands that it can determine where the reward will be after a single trial -- if the raisin was under the triangle this time, keep picking the triangle. Otherwise, pick the other shape. The monkey has learned to learn and this learning transfers well to other similar situations. Adolph believes that balance is learned in a similar fashion. Infants explore constantly while their bodies and their environment change. They learn slowly to master each postural system (i.e., sitting, crawling, creeping or walking) but their new skills transfer to all kinds of movement situations.
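
The end state the monkey reaches is simple enough to write down. Here is a toy Lisp sketch of the win-stay, lose-shift rule over two alternatives -- my own illustration, not a model drawn from Harlow's or Adolph's work:

    (defun make-win-stay-lose-shift (first-choice)
      "Return a closure over two alternatives, :A and :B. Call it with
    T or NIL saying whether the previous pick was rewarded; it returns
    the next pick: stay after a win, switch after a loss."
      (let ((choice first-choice))
        (lambda (rewarded-p)
          (unless rewarded-p
            (setf choice (if (eq choice :a) :b :a)))
          choice)))

    ;; One unrewarded trial is all the information the learner needs:
    ;; (funcall (make-win-stay-lose-shift :a) nil) => :B, forever after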

The other learning style is associative learning. Here, we quickly learn to associate one thing or situation with another. As we all know, this happens very quickly. What is less obvious is that it doesn't transfer readily -- a fact learned in one situation does not come to mind readily in others even when these others may appear (usually on retrospect) to be very similar. Adolph has an experimental setup that demonstrates this very effectively. A 20- to 30-foot pathway is constructed with one 4-foot section replaced by a material with very different friction (e.g., teflon) or by a very spongy material (foam rubber) covered with fabric. In either case, the different section is visibly obvious. Infants will walk down the path and slip on the teflon or fall into the foam on the first trial but quickly learn the difference and navigate the odd section on later trials. This doesn't sound all that odd, but wait.

Adolph also tests adults. They sign a consent form that describes the experimental setup: that they will be walking down a path that has a section replaced by different friction or material. Then, they walk down the path with the obviously different section and..., they fall down! This is hard to believe but I've seen the film. Equally incredible is that infants that have learned about the 'funny' section in one setup don't transfer this knowledge to other, similar, setups. For example, if they learn about the foam section when it is covered with blue fabric, this knowledge doesn't transfer when the foam is covered with red fabric! (I'm not sure about transfer with adults).

This is great research on a topic which is central to embodied learning. I'm really curious about computational models that fit into the learning set framework. Neural networks are an obvious candidate but I'm not aware of them fitting into the learning to learn category. If anyone knows any details, please drop me a line.


The Supernaturalist
Monday, October 25, 2004

Eoin Colfer is a former school teacher who now writes -- nominally -- kids books (including the wonderful Artemis Fowl series one, two and three). His worlds are wacky, his characters a bit over the top, and his plots are twisty, curvy and very satisfying. The Supernaturalist relates the adventures of Cosmo Hill, an orphan who, thanks to a near death experience, can see strange spherical blue creatures that appear to be sucking the life out of suffering humans. Cosmo and his newly found street friends battle the parasites, teams of assault lawyers, the police, the Myishi corporation and thuggish orphanage guards in a plot full of crises and satisfying switchbacks. It's easy reading and great fun.


Visualizing Object-Oriented Software in Virtual Reality
Saturday, October 23, 2004

The authors describe Imsovision, a virtual reality based software development environment. Imsovision provides a visual representation of a UML class diagram augmented with some metrics information (e.g., the number of lines of code in a method). Though it can be used on the desktop, the tool is designed for a Virtual Reality CAVE (a choice which would seem to limit its usage to software shops with truly amazing budgets!) This paper reports on the tool's architecture, discusses how the authors map UML attributes to visual ones, and ends with a few examples. No experimentation is provided to show that this tool is better for software development or system learning than others. In short, it is a bit of a disappointment. The state of software development still sucks and there are many reasons why. Tools are certainly not the main reason that buggy, shoddy and unstable software is more the norm than the exception. Better tools can, however, help improve the situation. It's not clear to me that we should be worrying about 3D visualizations when our editing environments are still clunky and weak.

Oh well, I'll go back to my corner and keep grumbling to myself...


A Knight of the Word
Monday, October 18, 2004

I haven't read Terry Brooks for ages and ages. The various Shannara books were fun but too similar and, well, deus ex machina to be completely satisfying. So it was with a bit of surprise that I found a darker, more mature Brooks dealing with humanity's current crises (albeit in the realm of fantasy). The characters are still a bit too earnest and take some situations too seriously but Brooks has definitely improved. I look forward to reading or listening to the first book and, someday, the third.


STREAM: The Stanford Data Stream Management System
Monday, October 18, 2004

When I think of databases, I think of big datasets being altered by transactions and queried for reports. Traditional databases like this continue to grow, but there are several new members of the database family that tweak some properties of the standard relational model. One of these is the Data Stream Management System (DSMS), which attempts to corral the moving web of interactions in which we all participate. The goal is similar to that of moving from batch algorithms to incremental and online ones -- we want to compute answers in (near) real-time without having to build up big tables that we're going to throw away as soon as the query is complete (come to think of it, this is analogous to the collect versus map question). We also want to have answers available all of the time.

The DSMS discussed in this paper uses an SQL-like language called the Continuous Query Language (CQL). One of its primitives sets the size of the window through which a stream of data is viewed. This can be determined either by a record count (show me the last 300 records) or by a time period (show me all the records from the last 2 minutes). The fun begins when working on a system with many simultaneous queries running against multiple, often bursty, streams. How can the queries be optimized against time and space? Can estimates be provided when the load gets too high? Can the queries be distributed across machines? How do crash protection and recovery change when running in a streaming environment?
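
To make the windowing idea concrete, here is a minimal Lisp sketch of a time-based window -- just the semantics of "show me the last 2 minutes", not anything resembling STREAM's actual implementation:

    (defstruct (window (:constructor make-window (seconds)))
      seconds          ; how far back, in seconds, the window reaches
      (records '()))   ; (timestamp . record) pairs, newest first

    (defun window-insert (window record &optional (now (get-universal-time)))
      "Add RECORD at time NOW and expire anything older than the window."
      (push (cons now record) (window-records window))
      (setf (window-records window)
            (remove-if (lambda (entry)
                         (< (car entry) (- now (window-seconds window))))
                       (window-records window)))
      window)

    (defun window-contents (window)
      "The records currently visible through the window."
      (mapcar #'cdr (window-records window)))

    ;; (defvar *w* (make-window 120))    ; a 2-minute window
    ;; (window-insert *w* '(:temp 98.6))
    ;; (window-contents *w*)             ; => ((:TEMP 98.6))

A count-based window ("show me 300 records") would simply trim the list to a fixed length instead of comparing timestamps.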

This paper takes on portions of the first two questions and leaves the last two for future work. Stanford has found a number of nice bits and pieces to let queries share the load and to rationally drop records as streams become clogged. They've also built a well engineered system (at least on its surface level) that allows for monitoring and introspection of the system and its queries. The final two questions are interesting because the streaming model is different enough from standard RDBMSs that it's not clear whether or not all of the usual suspects are invited. For example, it's not clear that ACID transactions are the right model when you're already assuming that the data is streaming by constantly.

This is interesting work and will become more so over time. As we grow more digitized, there will be a need to have systems monitoring multiple data sources in real time. For example, critical medical care could improve by tracking connections between instruments rather than only looking at each instrument in isolation. Of course, this technology will be a double edged sword -- some of the streams out there are better left unmonitored. How we can keep them that way is not a technical problem but it is a problem that needs solving.


Information Ecology: Open System Environment for Data, Memories and Knowing
Saturday, October 16, 2004

Bowker and Baker examine the interconnections between memory, data, information and knowledge in the context of a study within the Long Term Ecological Research (LTER) community. In our time, databases have become fundamental but the "data [within them] never stands alone." Indeed, the memory of an organization exists in multiple interacting forms and includes both data and procedures. As organizations grow, the data and the interacting web of procedures grow with them. This becomes especially problematic when data must be shared across space and time; standards must be created and agreed upon, units must be unified, formats must be formed and all must be maintained. The obvious answer is to make heavy use of metadata. Here, however, the problem recurses -- how are we to set standards for the metadata? Indeed, "the proliferation of metadata standards within environmental science [is] as significant as the proliferation of data standards themselves." "This suggests the need to accept that there are very real social, organizational and cognitive machineries of difference which continually fracture standards into local versions." The solution for this requires not more standards but a "careful analysis of the political and organizational economy of memory practices..."

The authors posit two dimensions: data and knowledge. Data can be local or global; knowledge can be tacit or explicit. This creates four quadrants:

  • Local data, Tacit knowledge: Data management
  • Global data, Tacit knowledge: Information management
  • Global data, Explicit knowledge: Domain knowledge
  • Local data, Explicit knowledge: Research knowing

Information flows from quadrant to quadrant with feedback across boundaries and with change within each quadrant. The whole forms an ecology of information. If the dynamic flux of this ecology is ignored, we "risk putting in place systems that create barriers to inquiry." The examples the authors present are suggestive and show that more than technology is required for database management.

Another insight of this paper is that the process of infrastructure building is far more complex than it first appears. It is hard to define what Information Managers do.

Software engineers write programs that can be demonstrated in conferences and written up in journals. Domain scientists produce data which can be run through a research protocol and published in a journal. Information managers on the other hand service, manage, and design the flow of information (as do librarians). They take the materials -- organizational, technical and data -- which are at hand and make it all work together. Their work is rarely written about; when spoken of, it frequently has the 'what I did during my holiday' patina: it is too specific to generalize and seems too small scale to label important. It is the work of bricolage as much as work of engineering, in Levi-Strauss's (1966) terms.

Information management is a vital part of any system but we "don't have good ways of talking about [it]..." This process-oriented work is "frequently invisible and rarely supported." In attempting to bring this work to the forefront, Bowker and Baker are performing a valuable service. As they put it, we live in a tension between homogenization and diversification. "The question then is not only 'with what epistemological and ontological frameworks shall we work?' but also 'how can we work at the intersection between different frameworks?'" The goal of long term studies and real world data mining is not 'how can we capture the data?' but rather 'how can we build an open information ecology in which the changing data can live and prosper today and tomorrow?'

This is a challenging paper from well outside the standard views of Computer Science. Perhaps that is why it feels so refreshing and correct to me. It is easy for technologists to view problems in simple terms but real problems and answers must live in contradiction with one another in a world open to change. I believe that the broader analysis of systems in political, organizational and essentially human terms will help us do better science and produce better answers for a complex world.


the Universe in a Nutshell
Tuesday, October 12, 2004

Even though his phrases sometimes sound like dialog from Star Trek, Stephen Hawking has a nice way with words. I read a lot of popular cosmology books back when I was younger but the field keeps growing, changing and getting a bit stranger every year. It's great fun hearing about theories that border on the fantastical and knowing that the whole story is probably even wilder than we've yet imagined. I'm not sure how much I learned listening to this but I definitely enjoyed it.


The Great Unraveling
Tuesday, October 12, 2004

I just finished listening to the abridged version of Paul Krugman's The Great Unraveling: Losing Our Way in the New Century. Krugman is an economics professor and New York Times columnist and seems to know his stuff. I'm sure that people of a different political persuasion would find a lot with which to disagree, but to my mind the picture he paints looks about right: things are worse in America now than they've been for a while and they are apt to get worse.

The book is well written and worth reading (or hearing) if you care about economics and, frankly, we should probably all care about economics more than we do!


Writing for the Living Web
Sunday, October 10, 2004

Mark Bernstein, the architect behind Tinderbox, writes:

To an artist, the smallest grace note and the tiniest flourish may be matters of great importance. Show us the details, teach us why they matter. People are fascinated by detail and enthralled by passion; explain to us why it matters to you, and no detail is too small, no technical question too arcane.

This is right.

It applies equally to programmers and, I think, it explains why some of us choose Lisp-like languages.

His entire essay is worth reading.


On the structured data web as an infrastructure for web services
Sunday, October 10, 2004

Dyer first discusses web services as implemented via XML (tagging), SOAP (transfer), WSDL (description) and UDDI (discovery). He has no issues with XML, WSDL or UDDI but finds SOAP overly complex (though he mentions that recent changes may obviate some of his complaints). It would make more sense, he argues, to use relational database management systems (RDBMSs) instead of SOAP as a transport protocol. For one thing, SOAP rides on top of HTTP, which raises security concerns and requires dealing with a stateless protocol. RDBMSs, on the other hand, are designed with sessions in mind and make it easy to query, add, update and delete. Dyer's Structured Data Web (SDW) is based on "information elements" -- a variable name plus its value and metadata in its current "problem solving episode". These episodes are defined by an application and a problem instance (i.e., an integer).
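
Reading between the lines, an information element might look something like this in Lisp -- the slot names are my guesses at Dyer's description, not his schema:

    (defstruct information-element
      name         ; the variable's name
      value        ; its current value
      metadata     ; whatever describes the value (units, provenance, ...)
      application  ; the application half of the problem solving episode
      instance)    ; the problem-instance half (an integer)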

The SDW trades space for time and uses an un-normalized schema. There are 3 main tables: one to hold the attributes of information elements, one to hold a complete history of events for each element (no deletes are made on this table) and one for describing applications and their variables by user. To use the SDW for RPC, both clients and servers can include information elements in their messages. Sentinels and other services can be built on top of this base. The SDW is therefore about semantically rich communication; transport can still use SOAP but these SOAP packets will be significantly simpler because the SDW takes care of all of the versioning and namespace issues.


MacArthur awards
Sunday, October 10, 2004

The last few weeks have been more crazed than usual for me - work, sick kids, sick spouse, sick self. I've a small stack of papers to write about but haven't had the time yet to do so. I'm hopeful that the Columbus day holiday will let me clear up my backlog.

The MacArthur foundation announced their 2004 winners a few weeks back. It's an auspicious group of men and women and includes poets, doctors, artists, computer scientists and an MIT engineer named Amy Smith who spends her time

cobbling sophisticated, life-enhancing devices from inexpensive materials for people in areas with little access to technology and even fewer resources to obtain it.

It's hard to describe how wonderful I felt reading this description. It's exactly what I wish I was doing with my life. Work like this provides hope amidst the spiraling ugliness of politics and global environmental crises.

All of the winners are the kind of people we need more of going forward. Though I wouldn't want to ruin their personal lives, I wish that the media did more to publicize these people and their dreams. Then maybe, just maybe, more children would grow up wanting to make a difference instead of wanting to make a billion.


Excerpts from the RNC
Monday, October 4, 2004

This has nothing to do with CL, but it is an excellent use of video editing and good timing to boot.


Behavior changing optional arguments are bad
Wednesday, September 29, 2004

Dan Corkill has been talking to me about some of the philosophy behind the redesign of GBB in GBBopen. One of these principles is that "behavior changing optional arguments are bad" because it's too easy to add or forget the argument and surprise yourself later. The compiler won't be able to tell you that something is amiss and the code will look okay but suddenly, things won't be as you remember them. Coincidentally enough, I came across a bug in GBBopen later that day that Dan tracked down to the only remaining optional argument in the source! I haven't looked at my own code yet but know that I've never liked using optionals for things like find-class's errorp -- I'm more comfortable with keyword arguments (even though they are putatively slightly slower). In any case, I'm going to look over my code base and see how I've actually been coding.
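
Here is a tiny illustration of the point, using a made-up find-widget function rather than anything from GBB or GBBopen:

    (defvar *widgets* (make-hash-table))

    ;; With an optional, a stray NIL in the call silently changes
    ;; behavior and the compiler has nothing to complain about:
    (defun find-widget (name &optional (errorp t))
      (or (gethash name *widgets*)
          (when errorp (error "No widget named ~s" name))))

    ;; (find-widget :gear nil) -- deliberate, or a slipped argument?

    ;; With a keyword, the intent is spelled out at the call site and
    ;; a mistyped keyword signals an error instead of passing unnoticed:
    (defun find-widget* (name &key (errorp t))
      (or (gethash name *widgets*)
          (when errorp (error "No widget named ~s" name))))

    ;; (find-widget* :gear :errorp nil)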


Quotes I note
Sunday, September 26, 2004

I've added the obligatory quotes page to this site. You can get to it in the "navigation" bar at the bottom of every page. It's pretty biased but that's what quotes are for.


The 100 Days
Friday, September 24, 2004

Only one book remains for me in Patrick O'Brian's most excellent series of historical fiction. Although its best parts were exceptionally sound, The 100 Days seemed rather uneven to me. Perhaps I wasn't paying attention, but I kept coming across the conclusions of plot lines whose beginnings I did not recall. Still, its attention to the details of both the drawing room and the vasty deeps places this book happily amongst its many siblings.


Weblog woes
Wednesday, September 22, 2004

Because unCLog is cobbled together out of shoes and ships and sealing wax, I've been running into some RSS problems lately. I've also had no time to fix them because I'm on the road. My apologies. I hope to get things up and stable again very soon.


A few potshots at Eastwood
Wednesday, September 22, 2004

Yesterday I wrote a few notes on a putative Common Lisp Lint I've dubbed Eastwood. Christophe Rhodes sent me an e-mail proving that I'm not nearly the subtle thinker I think I am [smile]. It's not that Eastwood is a bad idea, it's just that my examples were either too glibly presented or just downright bad. I'll try again:

  • Unbound exported symbols. Christophe points out that there are a lot of reasons to export unbound symbols: "It could be part of the syntax of a macro, or a method combination qualifier, or a slot initarg. It could be a documentation type, or a type name. Not to mention an unbound special." I could pretend that I had thought of all these and elected to ignore them for ease of presentation, but that would be lying. On the positive side, however, this is an even better example of why Eastwood needs to treat each warning as a persistent object so that the lint process becomes a dialog rather than a monologue. I still think that unbound exported symbols are usually a mistake so I want my tool to question me once, let me tell it I know what I'm doing and then get out of the way. This opens up a lot of other issues surrounding what conditions should cause Eastwood to re-issue a warning you said to ignore, etc. But it's more fun to set your initial goals high!
  • Doubly-non-destructive functions. Firstly, my example was terrible since #'append doesn't necessarily copy its final argument. I knew that but overlooked it in my haste. A better example would have been (remove-duplicates (mapcar ...)). On the other hand, Christophe also points out that "... consing is not the universal badness that one might assume: consider the effect of a GC occurring after the APPEND and before the REMOVE-DUPLICATES. The result of the APPEND is not garbage, so it's moved into a higher generation, and protected with a write barrier. If you DELETE-DUPLICATES, you will hit the write barrier, invoke a kernel trap, unprotect the page, and will need to do extra work at the next GC; if you REMOVE-DUPLICATES, no writing to old generations is involved. So the consy version could easily end up being faster, in this special case at least." I can't say that I've always wanted to learn about the innards of garbage-collection strategies but this sort of deeper interaction is definitely a reason to.
  • Mistyped function names. Here, at least, Christophe is in complete agreement with me -- one for three is pretty good in baseball. He points out that SBCL already issues style warnings for this sort of thing. I think that SBCL's style warnings are a good idea and they are definitely a subset of the sort of things that I want Eastwood to notice. The problem with SBCL is that every warning is, in essence, treated with the same level of repetitious urgency. This is true for many of our interactions with computers: "Are you sure you want to quit?". If we see the same warning or dialog over and over again, we learn to ignore it without thinking. As far as I know, this general UI issue hasn't been solved beyond the advice to reduce the number of dialogs and make every operation reversible. I believe that general solutions will require tools with significantly more intelligence and ability to learn than has been the norm. It also requires the sort of subtle and deep thinking that Christophe displays.


Harper's Index is quirky, fun and thoughtful
Tuesday, September 21, 2004

Ugh

Chicago Mayor Richard Daley announced a new municipal surveillance system that will use 2,000 remote-controlled cameras that "are the equivalent of hundreds of sets of eyes."

Cool

Scientists were developing a stinky robot that attracts flies, which it then digests and converts into electricity.

How do they know which comes first, the headaches or the diaries?

British psychologists warned that people who keep diaries are more likely to suffer from headaches, insomnia, digestive complaints, and social problems.

You can find the whole thing at http://www.harpers.org.


Eastwood
Tuesday, September 21, 2004

I've seen several references to lint-like Common Lisp tools recently (here on the CLiki, an old one by Barry Margolin on the CMU AI repository and others). I think this is a good thing. I've just started thinking about my own tool which I think I'll name Eastwood as in CLint Eastwood. Here are a few examples of things to look for:

  • Exported symbols with no binding. If it's not bound, why export it?
  • Doubly non-destructive functions. It makes no sense to call (remove-duplicates (append ...)).
  • Function names like initial-instance. I created this function the other day when I was trying to define initialize-instance. I've actually done this several times and it's not an easy bug to track down!

Doing this right (IMHO) requires declarative ways of saying that such and such a thing is wrong. Otherwise, you'd end up with a tool that was a hodgepodge instead of a help. Secondly, Eastwood should maintain state across runs over the same code base. The user should be able to say that some particular warning is not a problem. Otherwise, the tool will become an irritant as it generates the same warnings again and again. Finally, my third example in the list above shows that Eastwood needs to run concurrently during development, not as a separate phase of the development process.
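
For flavor, here is a first cut at the first check -- a sketch only, since (as the potshots post above admits) there are legitimate reasons to export unbound symbols, such as macro syntax, slot initargs and specials, that this knows nothing about:

    (defun unbound-exported-symbols (package)
      "List external symbols of PACKAGE that name neither a value, a
    function, nor a class -- candidates for a 'why export this?' question."
      (loop for symbol being the external-symbols of (find-package package)
            unless (or (boundp symbol)
                       (fboundp symbol)
                       (find-class symbol nil))
            collect symbol))

    ;; (unbound-exported-symbols :my-package)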


Particularly Helpful Editor Extension
Monday, September 20, 2004

Though I personally dislike it when people use the 'X' in a word like 'exposition' to make their acronym work, there are times when it seems almost unavoidable. I've been noodling around for the last few weeks with what I call PHEX (the Particularly Helpful Editor eXtension). The idea is dog simple: First, you start where the mouse is clicked and gather up nested "contexts". Then you use these contexts to figure out all of the commands that seem relevant. Finally, you present these commands in a popup menu and let the user do what they want. The only additional wiggle is that it is (supposed to be) easy to add additional PHEX commands so that you can integrate PHEX with the other tools at your disposal.
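
Roughly the shape of the thing, with all the names invented for illustration (the actual FRED integration is where the real work lives):

    (defvar *phex-commands* '()
      "Entries of the form (name relevant-p action); RELEVANT-P decides
    whether the command applies to a given context.")

    (defun add-phex-command (name relevant-p action)
      (push (list name relevant-p action) *phex-commands*))

    (defun relevant-commands (contexts)
      "Collect every command that applies to any of the nested CONTEXTS
    gathered up from the mouse click."
      (loop for (name relevant-p action) in *phex-commands*
            when (some relevant-p contexts)
            collect (cons name action)))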

I'm on a trip down to Washington D.C. right now to a KDD workshop (no, not Knowledge Discovery in Databases; it's Knowledge, Discovery and Dissemination) so I'm hopeful that I can find some time to make a bit more progress. The main todos are to make the popup menu more presentable and to slightly improve the syntax of the command-definition macro. Once I get that done, I'll stick the code out there. Of course, I'm writing this in Macintosh Common Lisp so unless you use FRED, you'll be out of luck. Of course of course, anyone who knows how to write EMACS code could easily do this for other platforms.


A Free Implementation of CLIM
Monday, September 20, 2004

I've finally gotten around to reading this paper by Strandh and Moore. It's a nice introduction to the Common Lisp Interface Manager (CLIM) and the LGPL version they helped write. I've read the CLIM manuals and played with it a little bit but have never made the opportunity to force myself to dig in and really learn it. This paper makes me want to do that, which, I think, is high praise indeed.


The Yellow Admiral
Saturday, September 18, 2004

I found the 18th Aubrey/Maturin novel of Patrick O'Brian significantly more satisfying than the 17th. The Yellow Admiral recounts a difficult time in the life of Jack Aubrey -- one in which he fears he may be passed over for promotion and thus become yellowed. The novel spends delicious time on both the naval and home affairs of the characters, developing and deepening our appreciation for them and for O'Brian's masterwork.


Ann Coulter and Jon Stewart on the other hand
Friday, September 17, 2004

Are scary and funny respectively in their interviews at Amazon.com. Favorite quotes:

Amazon.com: How important is this presidential election in the larger context of the Republic and its history?

Ann Coulter: Insofar as the survival of the Republic is threatened by the election of John Kerry, I'd say 2004 is as big as it gets.

This is also her answer for two other questions (out of six!). She also talks (apparently with glee) about crushing the Taliban and Al Qaeda (she must be missing some of the recent bulletins from Afghanistan and the rest of the world), North Korea, Pat Leahy and Carl Levin. I guess she just hates America or something.

Jon Stewart, on the other hand, knows that much of what we hear is, shall we say, a tad stretched.

Amazon.com: What would a Kerry administration mean?

Stewart: JOHN KERRY PLANS TO RAISE TAXES ON OUR TROOPS IN ORDER TO SUBSIDIZE FREE, GAY HEALTH CARE FOR TRIAL LAWYERS AND TERRORISTS.... THEN ABRUPTLY SWITCH TO THE OPPOSITE COURSE.

There are also interviews with Molly Ivins, Gore Vidal, and others. Now excuse me, I've got some papers to read on finding links in relational data.


I find Talking Points Memo excellent and depressing
Friday, September 17, 2004

Words and excuses meet incompetence, chaos and death. That's what this election is about.

Josh Marshall

His full piece is here.


GBBopen
Thursday, September 16, 2004

Though I ported it to both MCL and OpenMCL a while ago, I've just recently begun to actually use GBBopen, an open source blackboard framework. One way to think of GBBopen is that it combines a nice set of general Lisp utilities, a greatly enhanced class definition language, an in memory database and an opportunistic control mechanism into a synergistic whole. It's very nicely put together (and beautifully written; Dan Corkill knows his Lisp and his MOP and it's great fun to read such cool code!) but it's never easy to learn the ins and outs of a new framework.

I know that each time I learn a new way of expressing myself, I find myself knowing what I want to say and even roughly how to say it but still banging my head up against the tightly bound constraints of syntax. Moving beyond this to fluency is part of the joy of learning something new -- I hope we can all remember how great it feels to go from struggling with a language / framework (let's flip through that manual again, shall we...) to being able to sit down and just write. Part of this is learning the idioms (syntactic patterns) of the language. Part of it is being willing to keep experimenting even after you've found a way to do something in the search for a better way of doing the same thing! As an aside, I find this facet of language learning and learning in general to be deeply confusing: how do we know that there is a better way? What is it about a problem situation that leads us to think that we can improve?

I'm going to stop writing here because I'm feeling a bit sick and very foggy. I'll keep you updated on my progress.


Learning from Accidents
Monday, September 13, 2004

I just finished another Dan Bricklin essay whose concern is how we, as designers and engineers, can learn from failures. He covers Charles Perrow's "Normal Accidents", Trevor Kletz's "What Went Wrong? Case Histories of Process Plant Disasters" and the 9-11 report. To quote from Normal Accidents:

The main point of the book is to see...human constructions as systems, not as collections of individuals or representatives of ideologies. ...[T]he theme has been that it is the way the parts fit together, interact, that is important. The dangerous accidents lie in the system, not in the components. (page 351)

It's a strong essay and worth serious thinking.


Notes from software that lasts 200 years
Friday, September 10, 2004

I just finished an essay of Dan Bricklin's that he wrote back in July of 2004. He talks about the un-addressed needs of Societal Infrastructure Software - the glue that is holding more and more aspects of our world together. To quote

We need to start thinking about software in a way more like how we think about building bridges, dams, and sewers. What we build must last for generations without total rebuilding. This requires new thinking and new ways of organizing development. This is especially important for governments of all sizes as well as for established, ongoing businesses and institutions.

The structure and culture of a typical prepackaged software company is not attuned to the needs of societal infrastructure software. The "ongoing business entity" and "new version" mentality downplay the value of the needs of societal infrastructure software and are sometimes at odds.

It's a thoughtful and well written piece. The biggest difficulties are not technical, they are personal and managerial. How do we get governments to pay for long term solutions? How do we get people to demand software that works and lasts?


Thief of Time
Thursday, September 9, 2004

Suppose that the smallest amount of time is the time it takes to get from then to now and that the universe is destroyed and recreated at every tick. Now suppose you build a clock that ticked at precisely this rate. The clock couldn't both tick and be destroyed and recreated at the same time now could it? Of course not, so the only logical solution is that all time would have to stop. That's just part of the wonderful, funny and profound plot of Terry Pratchett's Thief of Time. It's a great book. I started and finished it last night. Where did all the time go?


The Commodore
Wednesday, September 8, 2004

Another glorious continuation of Patrick O'Brian's Aubrey / Maturin saga dispatched. I found this one slightly less enjoyable than others I've recently read. I'm not sure if it was the book or some of the background noise in my own life. Regardless of that, I'm pressing on to the Yellow Admiral.


Fear, Anger, Distortion
Wednesday, September 8, 2004

I know that this isn't a political blog and I don't want it to become one, but I recently wrote a post that talked about the current republican propensity to use the politics of fear, anger and distortion. I claimed that this was something with which everyone would agree. Several people took me to task on this and I backed off slightly. Just recently, however, I came across several more quotes and misquotes (mostly from Dick Cheney) that just shouldn't be ignored. Fear! Anger! Distortion! The republicans are hip deep in it.


Using Linear Algebra for Intelligent Information Retrieval
Tuesday, September 7, 2004

Berry et al. present a nice example study of how a little linear algebra (OK, it's a lot of linear algebra) can go a long way. Their example shows how Latent Semantic Indexing (LSI) discovers the non-lexical connections between words based on their context (see this older posting too). LSI proceeds as follows:

  1. First, create a matrix representing, for example, which words appear in which documents.

  2. Then use singular value decomposition (SVD) to split that matrix into three pieces: one representing the words, one the documents, and one the inter-relations. You usually also want to weight the words and documents for significance (this is where domain knowledge comes in handy).

  3. To query which documents are most similar to a new query, one turns the query into a vector representing which words are in it, scales it appropriately and then compares it to each document (using, for example, a cosine similarity measure).

Amazingly, that's it -- except for interpreting the results! SVD is similar to principal component analysis (PCA) in that it can be used to reduce the total number of dimensions under study. The resulting matrices break down the original relationships into linearly independent factors and we can use the k largest ones to produce good estimates with much less computation.
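
Step 3, at least, is easy to write down. I'd hand the SVD itself off to a numerical library, but the comparison measure is just a few lines of Lisp:

    (defun dot-product (u v)
      (reduce #'+ (map 'vector #'* u v)))

    (defun cosine-similarity (u v)
      "Cosine of the angle between vectors U and V: near 1.0 means
    similar direction, 0.0 means orthogonal (no similarity)."
      (/ (dot-product u v)
         (* (sqrt (dot-product u u))
            (sqrt (dot-product v v)))))

    ;; Comparing a (reduced) query vector against a document vector:
    ;; (cosine-similarity #(0.2 0.7 0.1) #(0.25 0.6 0.0)) ; => ~0.98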

The authors go on to discuss fast methods of computing and updating SVD matrices and present a laundry list of applications including information retrieval, information filtering, cross-language retrieval, modeling human memory and dealing with noisy inputs.

The best thing about this technical report is that it carefully goes through the mathematical steps with good examples, tables and charts to make the path clear. You don't find this often in published papers because the scientific method is supposed to brush all the work under the rug or behind the bed so everything looks pristine when the guests come to visit.


Social Interfaces
Monday, September 6, 2004

JoelOnSoftware consistently has interesting things to say and today's essay on the social interface is no exception. Aside from excellent examples, I'd say that his main point is that:

Whereas the goal of user interface design is to help the user succeed, the goal of social interface design is to help the society succeed, even if it means one user has to fail.

That's contentious, but true. Societies work because they collect garbage (see, I even managed to bring Lisp into the discussion -- however obliquely!) and getting open groupware to work is going to require a lot of new thinking! It should be fun.

(note that this is the second posting of this. Sorry, had a little kernel panic, ahhhh)


John Kerry, again
Friday, September 3, 2004

Well it's nice to know people are actually reading this even if they do disagree with me!

I've been having some good e-mail dialogs with several people who pointed out some errors in my last post. Looking back, I was wrong to claim that it was hard to disagree with the statement that the Republican party, on the whole, uses the politics of fear, anger, vitriol and distortion.

I believe that that is true but many disagree and I was silly to think otherwise. My apologies and my thanks to those who took the time to set me straight.


John Kerry
Friday, September 3, 2004

(Update)

Regardless of your political affiliation, it's hard to disagree with the statement that the Bush re-election campaign uses the politics of fear, anger, vitriol and distortion. I'm not a fan of Bush (far from it) and I'm happy that John Kerry is finally making his own case:

"For three days in New York, instead of talking about jobs and the economy, we heard anger and insults from the Republicans. And I'll tell you why. It's because they can't talk about the real issues facing Americans. They can't talk about their record because it's a record of failure. I believe it's time to move America in a new direction; I believe it's time to set a new course for America." - John Kerry, September 3, 2004

I hope that we can start talking about the issues facing our country (war, Iraq, poverty, social justice, civil rights for all, health care) instead of futzing around making random accusations about things that happened 40 years ago.


Seeing Voices: A Journey into the World of the Deaf
Monday, August 30, 2004

I'm halfway through the abridged version of Oliver Sacks's book on tape. It is wonderful: interesting, thought provoking, and challenging. The historical details are fascinating enough all by themselves but Sacks also delivers his potent blend of science and psychology. Of particular interest to me is a fact that I knew but had never really considered: American Sign Language (ASL) is a complete language embedded in three-dimensional space. It therefore makes use of a grammar whose surface structure differs radically from that of spoken languages. I'm not sure how many computer scientists and the like have ever looked at it deeply but it seems to me that it might be an area ripe for deeper investigation.


A Walk in the Woods
Monday, August 30, 2004

This is the first book I've read (actually heard -- it was on tape) by Bill Bryson. He is very funny in that dry and self-effacing style that slowly grows on you. This is a great book from many perspectives: as narrative, as natural history, as autobiography, as the story of a friendship and a life. Bryson nicely mixes his tale with the tale of the Appalachian Trail (the AT) and both wind along pleasantly through trials and tribulations from Georgia to Maine.


Talk about network effects
Tuesday, August 24, 2004

We've all seen firsthand how interconnected systems can turn molehills into mountains -- the great power failure of 2003 is only one of the more recent examples. The quote below, however, shows just how integrated we're becoming.

"A particularly knotty problem is striking foreign military computer systems that are linked to commercial systems. During last year's assault on Iraq, planners were concerned that attacking the nation's integrated defenses may have created cascading failures that could have reached back into the international banking system." (From Aviation Week & Space Technology, August 16, 2004, Pg. 24)

Buddhists say that we are all connected to all (Interbeing in Thich Nhat Hanh's sense). As this becomes less and less metaphor and more and more physical, perhaps war will become a true impossibility.


Clustering and preferential attachment in growing networks
Saturday, August 21, 2004

Mark Newman looks at the time-elapsed properties of scientific citation networks in physics and biology. He finds that the odds of two scientists collaborating increase with the number of scientists that the two had collaborated with in common. He also finds that the odds of a scientist forming a new collaboration increase with the total number of collaborations that the scientist has. These empirical results fit well with existing theories of why real world networks show clustering (it's more likely that my two friends are friends) and power-law degree distributions (most people know a few people but some people know everyone!).

As usual, the math is nice (though in this case, it managed to stay fairly close to my level of comfort) and the writing is lucid and interesting. It's older research now but the study of the time evolution of networks is still very much in its infancy and this isn't a bad place to start reading about it.


An Introduction to Latent Semantic Analysis
Wednesday, August 18, 2004

Though many have believed that its popularity stems only from having a wonderful name, Latent Semantic Analysis (LSA) turns out to be both surprisingly useful and possibly an accurate representation of what goes on inside our heads. Landauer et al. show this by summarizing a large body of research comparing LSA with humans on tasks such as categorization, estimating coherency, semantic priming and even scoring essays (!?).

LSA takes as input a matrix representing the occurrence of, for example, words in phrases or phrases in documents or, most broadly, things in collections. It uses singular value decomposition (SVD) to break this matrix into three: one representing the rows, one the columns and one diagonal matrix of "weights". This representation can then be compressed by reducing the number of matrix dimensions. The "distance" between words/phrases/things is then determined by looking at the compressed analogue of the original matrix. The decomposition and compression steps force the matrix to reveal the hidden connections between the things (hence, Latent Semantics).
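
To put symbols on that (standard LSA notation, mine rather than necessarily the paper's): if X is the thing-by-collection matrix, the SVD factors it as

X = U Σ V^T

and compression keeps only the k largest singular values, giving the rank-k approximation

X_k = U_k Σ_k V_k^T

The "distance" between two things is then typically the cosine between the corresponding rows of U_k Σ_k (and, between two collections, the cosine between the corresponding rows of V_k Σ_k).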

As the authors say, you can treat LSA as a useful technique regardless of whether or not you believe the larger claim that it (or something very close to it) is actually how our brains function. They do, however, present an impressive array of evidence that LSA matches human performance pretty darn well.

Perhaps the most surprising part of LSA is that it works so well without taking syntax into account — all LSA looks at is inclusion of things within groups. The order of these things doesn't matter. I'd be interested in finding domains where LSA failed because syntax really was important. It would also be fun to look for incremental algorithms (and/or ones that could be reasonably implemented in wetware). In any case, it's a technique I want to add to my toolbox (Lisp programs coming someday).


The link prediction problem for social networks
Monday, August 16, 2004

This is a beautiful little paper — one of those you wish you'd written because it's such an obvious idea (at least in hindsight). Liben-Nowell and Kleinberg use a wide variety of topological measures to try and predict the links you'll see in the future based on the current state of a graph. For example, if I know all of the authors that collaborated on a paper over the last three years, how good a job can I do predicting which new collaborations I expect to see this year?

The two allow only the use of structural graph information — neither vertex nor edge attributes need apply. This is stringent but makes for interesting work. How much does the topology allow you to predict what to expect? They use measures based on path distance, neighbor vertexes, random walks, PageRank and similarity ranks. They compare these to a random predictor and use the two simplest measures (path length and common neighbors) as benchmarks. For data, they use the physics pre-print archives.

Not surprisingly, it is easy to do better than the random predictor. More surprising is that it is hard to do all that much better than common neighbors. Even when a more complex measure does score higher, it is often only on some of the data sets!
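
For the record, the winning benchmark is nearly trivial to compute. A minimal sketch, assuming a hypothetical neighbors function that returns a vertex's adjacency list:

(defun common-neighbors-score (u v)
  "Number of vertexes adjacent to both U and V; the
common-neighbors predictor ranks candidate pairs by this."
  (length (intersection (neighbors u) (neighbors v))))

That something this cheap is so hard to beat is a good chunk of the paper's charm.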

The paper points in a number of fascinating directions — many of which I didn't even know existed: Latent Semantic Analysis, unseen bigram estimation and the estimation of distribution support are only three of these.

Future work includes making some of these more complex measures faster and, I would think, seeing how much better you can do if you take attribute information into account.

Well written, pithy, great links! All in all, I'd highly recommend this paper.


Dynamic Network Analysis
Tuesday, August 10, 2004

Dynamic Network Analysis (DNA) blends traditional Social Network Analysis (SNA) with multi-agent simulation, cognitive modeling and machine learning to produce a tool appropriate for changing and uncertain (probabilistic) environments. DNA is applicable, for example, when modeling terrorist organizations: the agents and their relations are partially known at best and intentionally misleading at worst. The network is also subject to constant change in makeup, linkages, resources and goals.

To handle such problems, Carley puts forth her own meta-matrix approach which seems to be no more than the injunction to examine multiple networks at once (e.g., people/people, people/knowledge, people/events, knowledge/knowledge, knowledge/needs, events/events, and so on) to create a whole. She claims that most traditional SNA metrics have little value in such large multi-faceted networks and that new metrics are therefore needed. One such metric is the cognitive load of agents in the network. This amalgamates interactions, coordination costs, events and learning / training.

Given a snapshot of such a matrix, one can then examine how it will change over time. For example, the people in the network may be born, die, move, and add and sever connections; the knowledge/resources network may expand with innovation, contract with amnesia and change with technical discovery; the relations in either network change from cognitive, social and political processes.

DyNet is Carley's tool for analyzing networks in DNA terms. It is very much a work in progress but seems promising and has been used in experiments to determine how different kinds of groups react to different kinds of isolation strategies.

Psychology and Sociology can be seen as another manifestation of the two poles of the Nature / Nurture debate: are we governed by who we are (our own agency) or by the roles we play (our social network). It's fairly obvious at this point that neither answer is accurate and that our agency and roles co-evolve (cf. Susan Oyama's work for a wonderful and deep discussion of this in terms of the phenotype / genotype distinction). DNA may be too affiliated with other connotations for the term to catch on, but the goal of modeling complex systems as agents with social roles that use resources, pass information and change their affiliations is, in my opinion, spot on.


the Wine Dark Sea
Tuesday, August 3, 2004

Book 16 in Patrick O'Brian's amazing saga leads Aubrey and Maturin through political intrigue, stunning altiplano vistas, treacherous seas and even a volcanic eruption! It remains lovingly and beautifully written and leaves the reader hoping for more. On to The Commodore.


What a metaobject protocol based compiler can do for Lisp
Tuesday, August 3, 2004

Compiler macros are a simple way to give a compiler advice about better ways to generate code; declarations and pragmas are another. In this ancient paper Kiczales et al. outline a third that aims significantly higher. A metaobject protocol exposes a framework for some section of a system such that users can tune this section in different ways. The classic MOP book talks about expanding a language from a single point to an entire region of design space. The CLOS MOP provides mechanisms to adjust the allocation and behavior of objects so that, for example, one can efficiently implement both tiny objects with few slots and massive ones with more slots than anyone would care to think about. The point is not being able to have big and little objects in the same system (anyone can do that!). The point is that these big and little objects can both be efficiently managed in the same system — a much harder proposition.

The compiler MOP presented here is a prototype attempt to do the same thing for a Scheme compiler: let programmers extend and tune portions of the language protocols consistently, safely and efficiently. MOPs let a programmer have and eat their cake; they can mold the language to suit their needs without a loss in efficiency. This molding goes beyond the level of macros. Macros let you express things more succinctly and effectively but they do not change the fundamentals of the language. MOPs, on the other hand, actually let you stick your hands into the guts of the implementation and make adjustments to what goes on inside the box. A good MOP lets you make these adjustments without losing your hand!
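
The standard CLOS MOP gives a taste of that kind of surgery. Here's a minimal sketch (my example, not the paper's; validate-superclass comes from the closer-mop portability library or your implementation's MOP package): a metaclass that counts the instances made of any class that adopts it, without touching the classes' own code:

(defclass counted-class (standard-class)
  ((instances :initform 0 :accessor instances-made)))

;; Allow counted-class classes to inherit from ordinary classes.
(defmethod closer-mop:validate-superclass
           ((class counted-class) (super standard-class))
  t)

;; Hook instance creation at the metaclass level.
(defmethod make-instance :after ((class counted-class) &rest initargs)
  (declare (ignore initargs))
  (incf (instances-made class)))

(defclass widget () () (:metaclass counted-class))

;; (make-instance 'widget)
;; (instances-made (find-class 'widget)) => 1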

This paper is clearly early work and work that has mostly been abandoned (by Kiczales at least) in favor of aspects (and, horrors, Java). Nonetheless, it shows the potential of the approach with good examples and motivation. There is also a nice summary of related work and pointers to what needs to be done.

My own opinion is that MOP work like this needs to be combined with effective IDEs and with Artificial Intelligence classification and expert-like systems in order to put together a programming environment that adapts to the programmers and helps make order out of chaos. Such a project would be a major research effort but its benefits could be extreme.


Could it be a big world after all
Tuesday, August 3, 2004

I've long been a bit bothered by the conclusions Stanley Milgram reached in his famous "small-world" studies back in the 1960s. Reaching the global conclusion that every human is only several relationships away from every other human on the basis of a few studies with poorish completion rates seemed a bit over the top. Of course, I never did anything with my doubts whereas Judith Kleinfeld not only tried to replicate the study but also did extensive literature reviews and went through Milgram's notes at Yale.

She finds that although Milgram's data does support a sort of small-world hypothesis, it also supports the hypothesis that we remain fundamentally separated by class and race. She also investigates the psychological strength of the hypothesis: why do we want to believe that "it's a small world after all".


Simpson's Paradox
Monday, August 2, 2004

One nice thing about attending academic workshops is that you meet lots of smart people and learn new things. Today someone mentioned Simpson's paradox (really just a conundrum). Simpson's paradox occurs when group A scores better than group B on every separate trial yet group B scores higher when all of the trials are combined. Though surprising, it's really just a matter of not being able to add rates (because they are fractions with differing denominators!). The web site I linked to has several other good examples. It's worth looking at.
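
A made-up example (my numbers, not the site's) makes the trap concrete:

  • Easy problems: A solves 9/10 (90%); B solves 80/100 (80%). A wins.
  • Hard problems: A solves 30/100 (30%); B solves 2/10 (20%). A wins again.
  • Combined: A solves 39/110 (about 35%); B solves 82/110 (about 75%). B wins!

A spent most of its attempts on the hard problems and B spent most of its attempts on the easy ones, so summing numerators and denominators silently reweights the two trials.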


Why Social Networks are Different?
Wednesday, July 21, 2004

Newman and Park investigate why social networks differ from most others in that the degrees of adjacent vertexes in social networks tend to be positively correlated whereas they are negatively correlated in others. Why is it, in other words, that people with lots of friends tend to have friends with lots of friends but that sites with lots of links tend to link to sites with fewer links?

In a beautiful bit of mathematics that, sadly, is mostly beyond my current ken, they show that simply adding organization (grouping) to a network creates a strong tendency towards the positive correlation. This grouping also can explain why social networks have higher clustering coefficients (i.e., it's likely that I am a friend of my friend's friend).

They conclude their paper with two investigations: one of paper collaborations among physicists and one of links between members of boards of directors. In the first case, grouping alone appears to account for all of the correlation whereas boards of directors have an even higher correlation than the grouping can explain. In other words, the particularly high correlation among directors appears to have an actual social component.

Now I need to go back and re-read the paper with an eye towards actually understanding the math!


Principles and Applications of Continual Computation
Wednesday, July 21, 2004

This paper provides a pedestrian treatment of an uplifting idea: use excess computational power to help answer future questions. Although this isn't an original idea — I know of web browsers that attempted this at least 10-years ago and some of these were pretty sophisticated — Horvitz does frame the problem in a formal enough way that he can prove things about how to minimize the total computational delay or how to maximize the quality of answers. These proofs fit into specific scenarios constraining the kinds of problems they can handle (e.g., are they all or nothing or do they fit into an anytime algorithm framework). He also takes into account memory, caching and the real-time nature of some domains.

Looking back over the paper now, I'm not completely certain what it is that I don't like about it. It is well written, complete, and moderately formal. Perhaps that is the problem: its treatment is too formal and sucks the life out of something that should be fun. I react to it in the same way I react to statistical treatments of language learning — yes, we need the formality and some parts of language learning are statistical but statistics aren't the whole story. All the talk of policies and optimization leaves me thinking that the focus is too syntactic. What about semantics and pragmatics? They are harder to talk about and perhaps impossible to formalize sufficiently but they are where our computations begin to matter.

This is a stretch (and a reference to only a so-so movie) but until we computer scientists and artificial intelligence researchers understand the anger of Will Smith's character in I, Robot, we're going to keep creating tools and applications that are brittle and that fail to meet the whole wonderful and wacky world we live in.


Models of the Small World: A Review
Wednesday, July 21, 2004

Newman clearly and quickly covers several different models of small world networks in this 2000 J. Stat. Physics paper. Small world networks meld the properties of random graphs and complete lattices in a way that goes beyond the properties of either. They get their name from the idea that we are all closely connected to one another by "six degrees of separation" (more or less). Newman nicely lays out the math (although, once again, I'm forced to take much of what he says on faith... sigh) and presents examples and pointers to the wide variety of investigations that had already been carried out. Of particular interest to me were the discussion of disease and information transmission; a connection between scheduling problems, computational complexity and, of all things, Potts antiferromagnets; and a pointer to work on neural network architectures where small world networks were the only ones to possess both coherence and fast response to changes in stimuli. In brief, this is another excellent paper and, in spite of its age, it seems to remain a great starting point for understanding small world networks and finding links to the research done on them.


Christophe Rhodes on Benchmarking
Wednesday, July 21, 2004

Christophe Rhodes presented a nice overview of CL benchmarking at the First European Scheme and Lisp Workshop. I couldn't make it to the workshop and I've only skimmed the slides, but I couldn't help but notice the line "Can perform experiments to find things out, even on computer programs." As I work for the guy that wrote the book on the subject, I couldn't agree more!

Update: fixed three spelling mistakes -- ugh.


Too Circular?
Sunday, July 18, 2004

Antonio Menezes Leitao (my apologies for losing the accents in translation) has received mention in several other CL blogs thanks to some wonderful papers that showed up on the ALU website. Mentioning him here is probably a bit too incestuous but his Increasing Readability and Efficiency in Common Lisp is a masterful exposition of what you can do with compiler macros. His work shows just how impoverished my deprecate is!


What's been happening
Sunday, July 18, 2004

As I hope you already know, the Lisp blogging scene has been taking off of late. I expect you already know about Planet Lisp. It's a great blog aggregator of all blogs Lisp. Dave Roberts has the wonderful Finding Lisp, there's John Wiseman's Lemonodor, the always interesting Bill Clementson, and there are 23-others!


Been too long
Sunday, July 18, 2004

The last few weeks have been far too busy and my blogging has fallen off the edge of the world. I haven't figured out how to set up my own Blogger or what have you so I don't have any way to get comments. I think I'd like to enable them though, otherwise the writing feels a bit too solipsistic.


the Truelove
Saturday, July 10, 2004

Just finished book 15 in Patrick O'Brian's epic Aubrey / Maturin saga. What can I say? They are just a joy.


Class fusion
Thursday, July 8, 2004

Maybe it's just me, but I keep running into situations where the concrete classes in my system are composed of lots and lots of mixin classes from on high. Sometimes I handle this by using has-a instead of is-a and making each concrete instance a sort of manager who delegates its responsibilities out to its various mixins. Though flexible and a handy design "template" (dare I say pattern anymore), this can get to be a lot of work and involves some tedious bookkeeping. Therefore I sometimes do something completely different that most languages don't allow.

In Lisp, I can define new classes dynamically (and easily) and then make instances of them. When I want a new instance, I pass in a list of classes that the instance should inherit from (speaking loosely, I know instances don't inherit!). The creation function then tries to find an existing class that fits the need. If it finds one, it makes an instance of it and returns it. If an existing class cannot be found, it creates the new class and then makes an instance and returns it. It's almost like magic or, at least, like having your cake and eating it!

One quick example might make what I'm talking about more clear. My group has written a Common Lisp container library (which I'm going to package and put out there someday (soon (really))). To get an instance of a container, you call make-container:

(defgeneric make-container (class &rest args)
  (:documentation "Creates a new container of type 
class using the additional arguments (args).")
  (:method ((class symbol) &rest args)
           (apply #'make-instance class args)) 
  
  (:method ((classes list) &rest args)
           (let ((name
                  (or (u:find-existing-subclass
                       'abstract-container classes)
                      (u:define-class
                        (u:simple-define-class-name classes)
                        classes nil)))) 
             (apply #'make-instance name args))))

As you can see, calls to this with a symbol just go on and use make-instance. If, however, you pass in a list, magic happens. The find-existing-subclass function looks at all subclasses of abstract-container trying to find an existing class that inherits from all of the desired superclasses. If one cannot be found, then define-class creates such a class on the fly, using the name that simple-define-class-name builds (ain't Lisp cool!).

I've recently used this trick to add thread safety and some special printing capabilities to the container library. The thread safe code adds :around methods to acquire and release locks as necessary. Dynamic class creation makes it easy to add thread safety to existing container classes without creating a plethora of new classes that look like:

(defclass my-container-with-thread-safety
  (my-container 
   thread-safe-container-mixin)
  ())
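
The :around methods themselves are tiny. A minimal sketch, using OpenMCL-style lock primitives (make-lock and with-lock-grabbed) and a hypothetical insert-item generic standing in for the container protocol:

(defclass thread-safe-container-mixin ()
  ((lock :initform (ccl:make-lock) :reader container-lock)))

(defmethod insert-item :around
           ((container thread-safe-container-mixin) item)
  (declare (ignorable item))
  ;; Hold the container's lock around the real work.
  (ccl:with-lock-grabbed ((container-lock container))
    (call-next-method)))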

Eventually, I'll get all this code out there under some Open Source license. If you have questions or comments before then, please let me know.


Graph Theory in Practice
Tuesday, June 29, 2004

This older two-part (one, two) series by Brian Hayes from American Scientist is a nice introduction to ideas in graph theory for those who don't know much about networks and graphs. It covers a bit of history and limns the main steps of the recent past -- random graphs, small world graphs and so forth. One nice touch is the inclusion of some AT&T call graph data (a graph with vertexes for each telephone and an edge between vertexes for each call). Twenty days of call graph data make a graph with 290-million vertexes and 4-billion edges. That's a big graph and that was four years ago. I expect that the same experiment done today would make an even bigger monster. If you're interested in the topic, I wrote a review of Mark Newman's Structure and Function of Complex Networks not too long ago.


Constructing Flexible Dynamic Belief Networks from First-Order Probabilistic Knowledge Bases
Monday, June 28, 2004

Glesner and Koller use Knowledge Based Model Construction (KBMC) to build Dynamic Bayesian Networks extended with a probabilistic First Order Logic (FOL). They also use an FOL to express probability distributions compactly. In particular, they can represent Conditional Probability Tables (CPTs) as decision trees. This is particularly helpful in asymmetric situations where, for example, variable A is dependent on variable B for only certain values of variable C.

Probabilistic FOL adds a probability distribution to the set of possible worlds (models). It is undecidable but one can make headway by restricting the full power of PFOL. For example, Haddawy and Krieger represent a class of Bayesian Networks (BN) using a subset of PFOL. They assume:

  • Each rule body must be the head of some other rule
  • All variables in the body also appear in the head
  • No two rules have ground instances (instances with no variables) with identical heads
  • The rules are acyclic.

Glesner and Koller relax the third rule and make an end-run around the fourth by adding time (so that the tail of a rule can point at the head of the same rule in the next time step). They use the by now familiar "canonical" ICI influence combination methods such as noisy-or. They attach the method to particular nodes in the belief networks (whereas I think that it only makes sense to attach them between pairs of nodes).

Given the rule set, the influence combination annotations and incoming evidence, they can construct a BN incrementally. All of the logic is sound and complete because it is based on Haddawy and Krieger and a similar proof works for both. Finally, the decision tree CPT representation makes pruning decisions easier so that the resulting network doesn't get out of control too quickly.

This is more an "idea" paper than a "system" one. In this case, the ideas are quite interesting but there are a few too many "it would be straightforward" assertions to leave one feeling completely comfortable. For example, the pruning work had not actually been done and there are allusions to adding "roll-up" and handling multiple-scale temporal reasoning that were as yet just twinkles in the authors' eyes. The paper is well expressed, however, and adds another valuable flower to the Bayesian Model construction garden.


Causal Independence for Probability Assessment
Monday, June 28, 2004

A Bayesian Network (BN) encodes probabilistic relationships into a directed acyclic graph where the vertexes are variables and the edges (can) represent causality. The network both represents expert knowledge and provides mechanisms for computation. At issue in this paper is how to efficiently represent certain types of causally independent knowledge. For example, there may be an effect that can be caused by n different things. In the general case, I would need to specify 2^n different parameters to completely describe this. If these causes are independent, however, I may be able to describe the situation using only n parameters. This is a big win.

Details, details

Heckerman first describes the noisy-or model -- where each cause has some chance of turning on the effect, all the causes are independent and everything is binary (causes and the effect). The idea is to note that for each cause C_i there is some probability q_i that the effect won't happen even if the cause is true. (If the effect always happened, then it would be deterministic and you wouldn't need the noise!) Now suppose that two (and only two) causes i and j are true; then the probability that the effect will still be false is q_i times q_j, so the probability that it fires is:

1 - q_i q_j

In general, noisy-or works by taking the product of the q_i's of the true causes.

In a 1989 paper, Max Henrion describes how to extend noisy-or to non-binary causes and effects and how to add a leaky node to account for we-know-not-what. In this paper, Heckerman shows how to generalize the framework so as to model noisy-max, noisy-and, noisy-addition and so forth.
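
As a sanity check, the whole leaky binary model fits in a few lines of Lisp (my sketch; qs holds the q_i's of the causes that are currently true):

(defun noisy-or (qs &optional (leak 0.0))
  "Probability that the effect fires, given independent true
causes with inhibitor probabilities QS and a LEAK probability
that the effect fires with no known cause at all."
  (- 1.0 (* (- 1.0 leak)
            (reduce #'* qs :initial-value 1.0))))

;; Two active causes, each inhibited 40% of the time:
;; (noisy-or '(0.4 0.4)) => 0.84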

He then goes on to describe four specializations of the general model. These are: amechanistic, temporal, decomposable and multiply decomposable causal independence. Each has certain requirements and certain benefits. Heckerman stresses that "the preferred form will depend on the specific causes and effects being modeled as well as the expert providing the model." As usual, you cannot really use this stuff without thinking about it.


Just finished the Nutmeg of Consolation
Sunday, June 27, 2004

If you are familiar with Patrick O'Brian's epic Aubrey / Maturin saga, then you probably already have a smile on your face remembering the sweet Nutmeg's adventures. If you're not familiar and you like to read good novels, then I'd highly recommend the entire series. As for me, it's on to the Truelove.


The Structure and Function of Complex Networks
Friday, June 25, 2004

Anyone interested in the science of networks should run, not walk, to their keyboard and touch type, not hunt and peck, for this paper right away. Mark Newman is not only an excellent physicist and mathematician, he also writes well (he also appears to be both young and handsome -- some people have all the luck). This review explains why networks are important, the techniques we have for classifying them, and the models we have to explain their structure, their growth and the processes that happen on them. It accomplishes all this in 74-beautifully written pages and includes an excellent bibliography (429-references!). Read more...


at a workshop...
Wednesday, June 23, 2004

I'm at a workshop all this week so no posts. Too sad.


The ties that Bind
Saturday, June 19, 2004

Two of Common Lisp's interesting features are destructuring-bind and multiple-value-bind. The first lets you easily pull a list apart using the same syntax that lambda lists use. The second makes it easy to use the multiple values returned by a Lisp function. The trouble with them is that:

  • They are too long to type and read (auto-completion helps here of course);
  • Unlike let, they only work on one list/function at a time. If you have several lists you want to destructure, your code is forced way off to the right -- this is particularly problematic in a presidential election year;
  • They don't work quite the same way. For example, you can use nil in destructuring-bind to indicate that you don't care about a certain variable. This won't work with multiple-value-bind. Instead, you need to put in a variable and then (declare (ignore it));
  • Finally, let, let*, destructuring-bind and multiple-value-bind are all really doing the same thing -- letting you attach convenient temporary names to things so that you can use them more effectively.

Bind is a macro that unifies the four Common Lisp forms. It lets you write code like:

(bind (;; destructuring
       ((name &key x y to-x to-y) (ensure-list slot-spec))
       ;; regular let
       (index (or primary-key key))
       ;; multiple-value-bind
       ((values next-x next-y) (step-path x y to-x to-y)))
  ...)

rather than:

(destructuring-bind (name &key x y to-x to-y)
                    (ensure-list slot-spec)
  (let ((index (or primary-key key)))
    (multiple-value-bind (next-x next-y)
                         (step-path x y to-x to-y)
      ...)))

bind continued

To my eye, the first is easier to read, more uniform, simpler to write and has less "syntax" than the second. The only thing that the current implementation of bind loses is the difference between let and let*. On the other hand, I've never understood why this difference exists. (Sure, it might make it a little easier for the compiler to optimize something (maybe) but even in this case, a little analysis would make it clear whether to set the values sequentially or in parallel.) I'd be happy to learn that I'm wrong -- or, at least, happy to learn! -- but I think the let / let* distinction is something that we could easily live without.
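
For anyone who hasn't tripped over the difference: let binds its variables in parallel while let* binds them sequentially, so only the latter lets an init form see an earlier binding from the same form:

(let* ((x 1)
       (y (1+ x)))  ; this X is the one just bound above
  (list x y))       ; => (1 2)

(let ((x 1)
      (y (1+ x)))   ; this X is whatever outer binding exists
  (list x y))       ; (or an unbound-variable error if none does)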

If you'd like to try bind, you can find it on CLiki.net. If you have any questions or comments, let me know.


Lisp's Climbing Popularity
Saturday, June 19, 2004

It turns out that searching for 'unclog' on google now returns this weblog as the number one entry. That's pretty amazing. I'm sure you all realize what this means: Lisp is more popular than plumbing. Hey, it's something (smile)

Of course, another take on it would be that plumbers need to start linking to each other's pages...


Iteration and Collecting
Friday, June 18, 2004

Lisp provides multiple mechanisms to iterate over its built-in collections (mapcar, mapc, dolist, loop, maphash, with-hash-table-iterator and so forth). Some of these operate on the underlying data -- pure iteration; others collect and return the results -- iteration for collection. Note that you can write an iteration function if you have a collection one:

(defun iterate-using (collection-fn iteration-fn dataset)
  (dolist (item (funcall collection-fn dataset))
    (funcall iteration-fn item)))

This isn't a good idea, however, because it always conses up a new list just to iterate over it and throw it away. A better plan is to write iteration functions for your data structures and then write the collection in terms of them. In fact, if you plan ahead and structure all your iteration functions so that the function argument passed in is always last, then you can use a single function for all of your collecting:

(defun collect-using (map-fn filter &rest args)
  "Collects stuff by applying the map-fn to the arguments.
Assumes that the map-fn signature has the function to be 
applied as its last argument."
  (declare (dynamic-extent filter args))
  (let ((result nil))
    (apply map-fn 
           (append 
            args
            (list 
              (lambda (thing)
                (when (or (not filter) 
                          (funcall filter thing))
                  (push thing result))))))
    (nreverse result)))
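
For example, given a list iterator that follows the function-last convention (a two-line stand-in for the iterators you'd write for your own data structures):

(defun map-items (list fn)
  "Call FN on each element of LIST; note the function comes last."
  (dolist (item list)
    (funcall fn item)))

;; (collect-using #'map-items #'oddp '(1 2 3 4 5)) => (1 3 5)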

You can use the same trick for other common dataset tasks like counting:

(defun count-using (map-fn filter &rest args)
  "Counts stuff by applying the map-fn to the arguments. 
Assumes that the map-fn signature has the function to be
applied as its last argument."
  (let ((result 0))
    (apply map-fn 
           (append 
            args 
            (list (lambda (thing)
                    (when (or (not filter)
                              (funcall filter thing))
                      (incf result))))))
    
    (values result)))

I've added a filter here because that's a pretty common need.

This sort of generic, I-don't-care-about-the-types, first class functions stuff is what makes dynamic languages like Lisp so beautiful.


More brilliance from Shriram Krishnamurthi
Wednesday, June 9, 2004

Shriram Krishnamurthi is a professor at Brown University and a wonderful champion of Scheme. He has a nice talk entitled "The Swine Before Perl". It's a hoot!

(automaton see0
  (see0 (0 -> see1))
  (see1 (1 -> see0)))

is clearly ugly, evil, and an insidious plot hatched by misbegotten academics

<automaton see0
  <state name="see0">
  <trn> <from> 0 </from>
           <to> see1 </to> </trn> </state>
  <state name="see1">
  <trn> <from> 1 </from>
           <to> see0 </to> </trn> </state>
</automaton>

is a hip, cool, great new idea

The glorification of XML, garbage collection's sudden rise to blessedness, and the new-found joy of aspect-like flexibility make Lispers cranky because we've already been there and done (much of) it. But we can't act like curmudgeons! It's up to us to keep reaching out and welcoming the great unwashed! I think (some) people are starting to catch on.


You know what really makes me mad?
Wednesday, June 9, 2004

I hate it when phone access account systems make you enter your account number and then you have to tell the person you finally reach your account number all over again! That's just plain silly.


The test-first mantra
Wednesday, June 9, 2004

Like Tim Bray (and many others!), I'm a fan of test driven development. I even wrote and documented a test framework for Lisp in January of 2001 (my first non-trivial macro). Unlike many others, however, I've never been test-infected. In spite of good intentions, I generally honor testing more in theory than in practice. Odes to testing like Bray's leave me feeling both somewhat sullied, yet curiously inspired to try again. First, some thoughts on why testing for Lisp is different than testing for Java or C#.

  • Lisp is interactive: Lisp is so test/code friendly that it's too easy to test directly in the Listener and only in the Listener. You can't do that in Java -- you have to write code. I've found that even very slight impediments are enough to prevent me (and, I think, most other people) from doing what's right. If there is nothing else that American culture teaches, it's that "easy" is far more popular than "correct".
  • Lisp is functional: True functional code is side-effect free and comes close to the dream of provable correctness. If you have functional code and you've tested it in the listener, you probably don't need to test it again -- not at the unit level in any case.
  • Lisp is language design: Large Lisp systems almost always involve building languages that express the problem domain. This is so much beyond modeling a domain with classes that I don't think people can get it without actually doing it themselves. One can test the parsing and expansion plumbing of this work but not the designs themselves. Furthermore, testing macros is tricky (you have to deal with expansions and with read, load and compile time behavior).
  • Finally, there is an echo in here. Lisp is interactive: the test system you use has to be a part of the language and of the IDE you use otherwise it just won't fit. The initial design of LIFT was a pretty straight port of the ideas in SUnit but the first redesign worked hard to make LIFT feel like Lisp. This isn't easy and there is still lots of room for improvement.

Though I seldom have as much time for it as I'd wish, I like building tools (like this one). Programming is an immense, boulder strewn path which we must navigate and every little smoothed patch of ground is a win. Tools smooth the ground. I think that there is lots of room for innovation and new ideas for testing within and for Lisp. More to follow...


ASDF-Install for OpenMCL
Tuesday, June 8, 2004

I hacked on asdf-install and OpenMCL a bit today because it seemed more complicated than necessary to get asdf-install running (of course, there's always the possibility that I was just being thick!). It took me about twice as long as it should have since I'm deeply used to Digitool's MCL and using EMACS feels like being encased in lead. I know people swear by it but it certainly isn't a discoverable interface!

In any case, OpenMCL has a nice extensible require system -- you can add additional handlers that are invoked when (require 'foo) is evaluated. The require system was written by Bryan O'Conner. This makes it pretty easy to get require to load and open ASDF systems in addition to regular Lisp files. In particular, this means that you can just

(require 'asdf-install)

and it will load for you -- just the way it does in SBCL!

Minor head banging! As I was writing this, I looked for Bryan's announcement and he already describes part of the ASDF require handler in his announcement. Turns out that he and I actually fulfill different needs and that my patch will handle asdf-install. So I'm still happy.


A New Framework for Sensor Interpretation
Tuesday, June 8, 2004

Titles like this one remind me why you should never call something new or complete or best: time moves on and it starts to sound strange!

Carver and Lesser present one of the final words on control from the Blackboard heyday of the late 1980s and the early 1990s. The Blackboard architecture is a framework for thinking about problem solving as an opportunistic, cooperative, and flexible process. It was born of work in uncertain, hypothesis-rich domains like speech understanding and signal processing. It faded for a variety of mostly sociological reasons since, truth be told, there really isn't a better architecture out there. Like Lisp, Blackboards are seeing something of a resurgence of late (there is even an open source version of the de-facto Blackboard standard being worked on) -- there is yet hope in the world.

Metaphorically speaking, a Blackboard is a shared memory space where lots of experts do their brainstorming. Each expert works on part of the problem and all contribute when and where they can. The trouble is that if all the experts try to work at once and present all their ideas, the Blackboard will become a mess and nothing useful will get done. The control problem is that of figuring out which expert(s) ought to have their say next.

The RESUN framework presented in this paper says that control should be based on "gathering evidence to resolve particular sources of uncertainty." RESUN implements this via a script based planner that can return (refocus) to previous decisions (and re-decide) as new evidence comes in. These plans provide the context (goal/plan/sub-goal) for the activities of the Blackboard. The nice thing about RESUN is that the Blackboard can do more than just make hypotheses and test them, it can also do differential diagnosis (if A and B are competing hypotheses for my data, then I can get evidence for A by finding evidence against B).

The work presented here still feels quite fresh even after 13-years. The Blackboard control problem was never really solved -- doing so is probably AI complete -- but solutions like this one show both that there is unexplored potential and that much work remains.


Linked: the New Science of Networks (chapters 11 - end)
Saturday, June 5, 2004

I wrote about Linked not too long ago. Now that I've finished reading, I'll fill in the details of the last chapters and end with my exciting conclusion.

Chapter 11: Hey, the internet is a scale-free network too. This makes it resistant to random failures but vulnerable to targeted attacks. It is also a small world network and tightly connected so that failures can cascade quickly. What's more, it's very complicated so maybe one day it will become self aware -- where did that idea come from?

Chapter 12: Search is hard because the internet is a big directed graph. That links have direction means that the internet is broken into four kinds of sub-nets: a connected core, a sub-net that is reachable from the core but that cannot get back, a sub-net that can get to the core but cannot be reached from it, and a vast spume of disconnected islands.

Chapter 13: "Life" is also best understood via network inspired principles. Knowing the genome is nice but that tells us little about the network of cellular reactions, or of protein-protein interactions, or of cell-to-cell interactions, or creature-to-creature ones. These webs all share structural properties (they are small-world and scale-free). This has implications for treating disease and understanding ecosystems.

Chapter 14: The economy is also made up of networks. For example, the connectivity graph of Fortune 1000 board members is very small world (~10,000 seats filled by ~7,700 directors!). Companies and corporations are also formed from networks (of employees) and linked by networks (of competitors, partners, suppliers and so on). This helps explain how small causes can have large effects (the dot-com bubble, the South East Asian economic collapse of the 1990s).

The Last Link: The book's summary: real networks aren't random, they are scale-free; they are not constructed, they grow, change and (in some cases) adapt. Furthermore, understanding networks and their laws will help us understand almost everything else we care about.

So given this summary, is the book any good? I found the first half quite readable and mostly interesting. However, as Barabasi moves from descriptions of networks to applications of networks, his prose becomes more and more purple and, to my ears, very irksome. I'm intending to read Duncan Watts's Six Degrees soon. That should help me make a useful comparison in both the writing and the science.


Spring poetry
Friday, June 4, 2004

Gerard Manley Hopkins had a way with words:

Thrush's eggs look little low heavens, and thrush
Through the echoing timber does so rinse and wring
The ear, it strikes like lightnings to hear him sing;

The rest of it is worth reading.


Another Awesome Algorithm Archive
Friday, June 4, 2004

I don't know about you, but I appreciate alliteration. In any case, The NIST Dictionary of Algorithms and Data Structures is pretty damn cool. It even has links to code snippets and some of them are even in Lisp. Of course, that brings us back to the question of why more of them aren't in Lisp. Sigh.


LAW: A Workbench for Approximate Pattern Matching in Relational Data
Thursday, June 3, 2004

The Link Analysts Workbench (LAW) is an SRI developed tool designed to "assist the intelligence community in creating and maintaining patterns, matching those patterns against relational data and manipulating partial results." A LAW pattern includes a graph with typed vertexes and labeled edges. These graphs are built up out of a pattern language meant to be understandable to non-computer-scientists but expressive enough to represent patterns of interest. Patterns also include a description of how near a match in the data needs to be (measured by graph edit distance) for it to be included in a query result set.

The graph edit distance metric used by LAW includes costs for each operation (node/edge changes, type changes, etc.). It includes an ontological distance based on sub-typing (e.g., replacing a phone call with communication). It's not clear where the ontology comes from (though it's probably Cyc related) or what might happen if the analyst's ontology differs from the system's. Given the metric, LAW uses an A* search based anytime algorithm to find matches in the data set. These will be sub-graphs of the relational data set that are close to the pattern. They are presented to the analyst graphically so that it is clear exactly how the match was made.

LAW's architecture is web based. It uses XML to transmit pattern, hypothesis and control information and SOAP as an RPC mechanism. LAW is designed to handle multiple exploratory tools working in tandem (for example, different pattern matchers, group detection algorithms, etc.).

Extensions in the works are a more powerful pattern description language, scalability work, better semantic control and improved automation of multiple tasks -- sounds as if a blackboard architecture might be handy!

This is a well written application paper showing a variety of tools being used together to (partially) solve a difficult problem.


Some Practical Issues in Constructing Belief Networks
Tuesday, June 1, 2004

Max Henrion presents a detailed case study applying Knowledge Engineering (KE) to the construction of a Bayesian Network for predicting damage in apple orchards. This is a well written and mostly easy to follow paper (even for someone like me with little background in the field). He steps through the phases of belief net construction and provides detailed explanations of noisy-or and sensitivity analysis. It is an old paper and not deep but it provides a welcome relief from some of the high flying abstractions found in more recent work.

Noisy-or defined

What is Noisy-or anyways?

Noisy-or applies when there are several independent causes and each cause has some probability q of producing the binary effect y even in the absence of the others. The probability of y given a subset of the causes is then found by multiplying. The win here is that we need only n parameters instead of 2 to the n.

Adding an additional leak probability (the chance that the effect occurs even when none of the (known) causes is true) is a handy modeling trick.

Noisy-or can be generalized to non-boolean causes and non-boolean effects. In the first case, we need to assess the probability of the cause for each level of the non-boolean cause. In the second, we treat an n-ary discrete variable as n-1 binary ones. The final value for the effect is then the maximum of the levels produced by each influencing variable. Note that we are assuming causal independence so even if every binary variable indicates "medium" (for example), we are still at medium, not high. [Frankly, I don't think I've understood what he means on this last one. Sigh.]

Sensitivity analysis

I'm sensitive to analysis, are you?

Sensitivity analysis indicates the relative importance of one variable with respect to another. One measure is the Sensitivity Range of y with respect to x: it is the maximum possible change in probability of y as the probability of x goes from 0 to 1. Obviously, the magnitude of the sensitivity range must be less than or equal to 1. This means that the further apart two nodes are (in the Bayes Graph), the less effect they can have on each other.

Things work differently for diagnostic links. Suppose A influences B and that there is a chance of error E in this assessment (assume that A and E are independent). Then L(b,a | e) = p(b|a,e) / p(b|-a,e). I'm sorry to say that I lose track of the math about here but Henrion asserts that we can get high sensitivity factors when the prior probability of A is high and the L(b,a | e) is small. I'll try to update this when I understand it.


Applications of Graph Visualization
Friday, May 28, 2004

Papers like this one almost make me wish I was writing in a more popular language like C or Java. Lisp has been around for a long time and it had most of the buzzwords before the words ever got the buzz: rapid prototype -- we got it; dynamic -- uh huh; garbage collection, run time type information, object orientation -- yes sir; interactive environments, cool tools, GUI frameworks -- we got that too; and so on.

The trouble is that the rest of the world is moving fast and Lisp's incredible lead is not being maintained. There are Lisp aficionados who moan that everything they see today was done on the Lisp Machine years ago, only better. Personally, though, I think that they are blowing smoke. Lisp is remarkably productive but there is no way that the tiny number of Lisp programmers can keep up with the vast armies of C, C++, Java, Python and Ruby programmers out there. Those environments are getting great tools and no matter how many times I hear it, I just don't believe that EMACS integration is the be all and the end all of developer Nirvana (note that I'm not knocking EMACS, I just don't think it is nirvana!).

Anyways, this paper is about dot and lefty and how they were used to make wonderful source code querying, debugging and process managing tools. It's mostly application but presents a well engineered system and has good pointers to a lot of work on building enabling environments.


The Stony Brook Algorithm Repository
Friday, May 28, 2004

I was looking for some graph algorithms today for work in sub-graph matching and graph isomorphism. I had already heard of Donald Knuth's Stanford Graph Base and stumbled onto the Stony Brook Algorithm Repository -- what a wonderful treasure trove (can you still say treasure trove or is that too cliche?). Great gooey gobs of gorgeous algorithms for all kinds of wonderful things. The only trouble is that almost none of them are in Lisp. Somebody should do something about that.


Jon Udell talks about logging
Wednesday, May 26, 2004

I find most of what Jon Udell says interesting. Today, he's talking about logging. I agree with him that logging is an under-utilized resource in part because it's not often available at the level of an OS service. Here are three of his ideas for things that would be useful to log:

  • Warnings. If the same warning appears repeatedly (or perhaps a set of related warnings spanning several apps), it's a sign that there's a problem with the software, or with the user's understanding of the software, or both. If we don't log these warnings, though, we can't detect patterns and respond to them.
  • Settings changes. As a user, how many times have you tried to remember what settings were in place when something that's broken used to work? As a developer, how many times have you tried to get users to remember what they changed? Aren't such changes important events in the life of an application, worthy of logging?
  • Launch and exit events. These are the most basic and obvious things to record, but we don't find them in the log. If we going to move toward "software as a service," shouldn't we keep track of what's used and how often?

Like most working programmers, I spend a lot of time reading trace and debug output from my code (and yes, I still debug with format statements! That I can do so easily and quickly is one of Lisp's big wins!). Usually, however, I have the feeling that the output could be so much better and analyzed so much faster if I only had the right tools. Now if I just knew what those tools were...


Linked: the New Science of Networks (chapters 1 - 10)
Saturday, May 22, 2004

Linked is one of those nice "popular science" books -- easy to read but missing most of the details. I had heard that Barabasi's book was full of self-congratulatory praise but I haven't found that to be true so far. To be sure, there is a good deal of personal anecdote and his group's research is featured more than others'. This, however, is par for the course for these kinds of books.

Overall, Linked is fun to read and informative. I've got the sense that I'm learning about the important directions in graph / network theory. The tone is a bit preachy at times but that doesn't detract too much from the science. In any case, here is my summary of the first 10-chapters:

Chapter 1: Networks are everywhere

Chapter 2: Euler starts graph theory; Erdos & Renyi invent the theory of random graphs.

Chapter 3: Most networks are not random. They are instead "Small world" networks: you can get there from here and you can do it quickly.

Chapter 4: Watts & Strogatz's clustering theory provides one explanation for small-world networks: space matters. If a network has an initial overlay of links, then adding (relatively) few random links will make it small-world.

Chapter 5: Real networks have "hubs": some nodes have many (many, many) more links than others. We call these networks "scale-free".

Chapter 6: (Many) networks have power laws relating a variety of their properties (e.g., the number of links for each node). This can't be explained by either random graphs or by Watts-Strogatz graphs.

Chapter 7: Real networks also exist in time: they grow (and decay) and the addition of links is not random. If you add growth and "rich get richer" properties to a network, you get power laws. There has been a lot of research examining different models and their properties (e.g., death, internal links, re-linking, non-linear effects and so on).

Chapter 8: Not all nodes are the same; add fitness to the network and it gets even more interesting. Some networks can behave like a Bose-Einstein condensate -- the winner takes all. Microsoft is a (potential) example of this in the business world.

Chapter 9: Scale free networks are robust against topological failures but vulnerable to targeted attacks. You can remove nodes or links randomly without worrying, but if you take out a hub, things go bad fast.

Chapter 10: The spread of ideas, disease, etc. all follow similar models but the accuracy of these models depend on the underlying network by means of which the thing spreads. Scale-free networks do not behave like other networks. In particular, even things with a very low spreading rate may have no critical threshold (the level at which the traditional models predict that the thing won't spread).


Incremental Clustering and Dynamic Information Retrieval
Wednesday, May 19, 2004

This paper analyses one variant of the dynamic clustering problem: how to efficiently maintain a clustering of points (from a metric space) such that the diameter of the maximum cluster is minimized. Their work focuses particularly on the needs of document indexing and retrieval (though it would easily generalize). They use Hierarchical Agglomerative Clustering (HAC) because the dendrograms (trees) produced can be viewed as query refinements. The rest of the paper presents and analyses several dynamic algorithms and gives tight time and performance bounds for them. Interestingly, the authors show that it is possible to obtain results that are comparable to the best possible.


Code Clarity
Wednesday, May 19, 2004

Which is clearer?

(let ((submeetings (links meeting-tree)))
    (when submeetings 
      (dolist (submeeting submeetings)
        (add-decoy-meetings 
          planner submeeting resources 
          taskforce intent organization))))

or this one?

(dolist (submeeting (links meeting-tree))
    (add-decoy-meetings 
      planner submeeting resources 
      taskforce intent organization))

The first version makes it clear that the links in a meeting-tree are submeetings and that nothing happens unless there really are some submeetings. The second relies on the reader knowing that dolist won't evaluate its body unless the list mapped over is non-nil.

For me, the second version wins by being shorter and more idiomatic. I can imagine people liking the first version because it appears more clear but I think that this clarity is deceiving and that the second version will wear better. Just a thought.


Detecting Threatening Behavior Using Bayesian Networks
Wednesday, May 19, 2004

Laskey et al. present Multi-Entity Bayesian Networks (MEBN) as a tool for intrusion detection. A MEBN is a Bayesian Network built from fragments (MFrags) such that the same fragment may appear multiple times representing different entities in the "real" world. MFrags have a set of resident random variables (defined by the MFrag), input random variables (which condition the resident ones) and context random variables (whose values must be true for the MFrag to apply). The variables take arguments called entities. IET has implemented MEBNs in their Quiddity*Modeler (a frame based system augmented with uncertainty).

The paper provides a brief overview of MEBNs and then spends the bulk of its time on a particular example of intrusion detection and an experiment using Quiddity*Modeler. There is a reference to another paper, MEBNs for Situation Assessment, which doesn't seem to be online.


TINAA is not an acronym
Wednesday, May 19, 2004

TINAA is a Lisp documentation system. Unlike JavaDoc and Albert, TINAA uses CL's introspection features rather than parsing the source code directly. The chief disadvantage of this is that you cannot document a system unless you can load it. In practice, though, I don't think that this will be a big deal. Here is an example of the TINAA-produced documentation for TINAA.

TINAA is meant to be extensible, though it's not always clear how good a job the first design does in this regard <smile>. It is based on the idea that a system is made up of parts and subparts and sub-subparts (all resting on the back of a turtle). TINAA can document anything as long as you tell it (a sketch of such a protocol follows the list):

  • What kind of sub-parts it has
  • How to iterate over the sub-parts
  • How to display information about it
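
In CLOS terms, that protocol might look something like the following. This is a hypothetical sketch -- the generic function names are my guesses, not TINAA's actual exported API:

(defgeneric subpart-kinds (part)
  (:documentation "Return the kinds of sub-parts PART can contain."))

(defgeneric map-subparts (function part)
  (:documentation "Call FUNCTION on each sub-part of PART."))

(defgeneric display-part (part stream)
  (:documentation "Write documentation for PART to STREAM."))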

At this point, TINAA knows about all the standard Common Lisp stuff and about EKSL's own defsystem (Generic Load Utilities). ASDF systems are next on the to-do list (as is making TINAA itself ASDF-installable). After that, world domination is ours.


Dynamically Constructed Bayesian Networks for Sketch Understanding
Monday, May 17, 2004

Christine Alvarado outlines an approach for sketch understanding that builds and combines fragments of Bayesian Networks (BNs). Fragment types are created up front that correspond to "shape and domain patterns." These are then instantiated based on evidence from the drawing strokes a user makes (e.g., several line segments may provide evidence for an arrow shape). Each fragment can be instantiated multiple times for different hypotheses about the data. The work is very much at the prototype stage, but it is eerily reminiscent of what we are hoping to do on a much larger scale in Hats.


Dependency Networks for Inference, Collaborative Filtering and Data Visualization
Monday, May 17, 2004

Heckerman et al. present Dependency Networks (DNs), a generalization of Bayesian Networks (BNs). A BN is a directed acyclic graph with probability distributions associated with each node such that the joint probability distribution for all the nodes can be computed. A DN is a directed graph -- not necessarily acyclic -- with probability distributions associated with each node such that local dependencies can be computed. There is no guarantee, however, that a consistent joint probability distribution can be found that matches the local distributions.

The chief advantage of DNs over BNs is that they can be learned significantly faster (less time and space) than BNs. They also allow for cyclic relations because they represent dependencies, not causality.

Speaking more technically, a DN for some set of variables X defines a joint distribution for X via a Gibbs sampler. Note that Gibbs sampling is order dependent (we can get different distributions depending on the order in which we iterate over the variables in X). The nice thing is that if the learned local distributions are close to the true local distributions, then the Gibbs sampling will produce a joint distribution that is also close to the true joint distribution (Hoffman (2000) proves this for perturbations measured with an L2 norm).
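
The sampler itself is easy to sketch. Here's a toy version under my own assumptions: each variable gets one sampling function (this is where the learned regression/classification models would plug in), and both arguments are alists:

(defun gibbs-sample (locals state &key (sweeps 1000))
  "LOCALS maps each variable to a function that samples a new value for
it given the full current STATE (an alist from variable to value).
Resample every variable, in a fixed order, SWEEPS times."
  (dotimes (sweep sweeps state)
    (loop for (variable . sampler) in locals
          do (setf (cdr (assoc variable state))
                   (funcall sampler state)))))

The fixed iteration order in the loop is exactly where the order dependence mentioned above creeps in.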

Each local distribution in a DN can be learned via any regression/classification method. "The DN can be thought of as a mechanism for combining regression/classification models via Gibbs sampling to determine a joint distribution." Heckerman et al. use decision trees to learn each local distribution.

After setting the stage by introducing and explaining DNs, the paper goes on to describe using them for probabilistic inference, collaborative filtering (preference matching) and data visualization. DNs work well for all of these tasks because they tend to be so much cheaper computationally. Indeed, BNs give slightly better results in several of the tasks presented, but at a far higher cost.

This is a nice paper: clean, well presented and mathy without being too mired in the muck.


Making deprecate part of the language
Saturday, May 15, 2004

Adding the language element deprecate to Common Lisp is easy, but adding a macro goes only part way toward really changing the language. What else is there?

  • The warning generated by deprecate needs to fit in with the rest of the Lisp implementation's warning and compile system,
  • There should be ways to deprecate language elements even when you do not have the original source (see the sketch at the end of this entry),
  • Any documentation system should be able to understand how deprecate fits into the Lisp implementation system, and
  • The IDE and tools of the implementation need to understand the new element and deal with it appropriately (coloring, filtering, and so forth). I think editors like Eclipse can do this, but I'm not sure whether that's really true or how easy it is to do so.

These are the sort of things -- things usually thought of as beyond or outside the bounds of a programming language -- that really matter to developers as they work.
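
On the second point, compiler macros already get us most of the way: define-compiler-macro attaches to a function name after the fact, no source required. A minimal sketch (frobnicate is a made-up name for some function defined and compiled elsewhere):

(define-compiler-macro frobnicate (&whole form &rest args)
  ;; runs when a call to frobnicate is compiled; the call itself
  ;; is returned unchanged
  (declare (ignore args))
  (warn "frobnicate has been deprecated.")
  form)

Using warn here also helps with the first point, since the message flows through the implementation's normal warning machinery.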


Graph based technologies for Intelligence Analysis
Sunday, May 9, 2004

This short position paper by Thayne Coffman, Seth Greenblatt and Sherry Marcus highlights how graph theory and Social Network Analysis (SNA) are crucial to helping overburdened intelligence analysts keep up with the volume of information today's technologies produce. In particular, they use sub-graph isomorphism (to express and find patterns) and SNA metrics (to cluster and understand the patterns). One reference that looks interesting is Dynamic Classification of Groups in a Social Network Analysis Case Study (in Proceedings of the 2004 IEEE Aerospace Conference, Big Sky, MT, March 2004).


Unsupervised Topic Discovery
Sunday, May 9, 2004

Schwartz, Sista and Leek limn their research in topic discovery in this white paper from 2001. They take what I believe is the standard statistical model of paper generation: a paper is written by selecting a set of topics and then using an HMM to select words from those topics (plus a General Language topic shared by all). The problem to solve is how to annotate a corpus with topics automatically -- this includes both finding the topics and naming them. The authors' solution is to:

  • treat each document as a query in order to find similar documents in the corpus (assuming that similar documents share at least one topic);
  • intersect similar documents' words (if two documents share a topic, they probably also share words related to it), yielding a set of document intersections;
  • cluster the document intersections (using k-means); and
  • purify the resulting distributions using Expectation Maximization (EM).

This leaves each topic with between 100 and 200 words. Given this set of discovered topics, the authors then create names for the topics out of the "most interesting" words in each.
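
My toy reading of the intersection step (not the authors' code; documents here are just lists of word strings):

(defun document-intersection (doc-a doc-b)
  "Words common to both documents -- a crude stand-in for a shared topic."
  (intersection doc-a doc-b :test #'string-equal))

(defun candidate-topics (corpus &key (min-overlap 10))
  "Collect the intersection of every pair of documents that overlaps
enough; these intersections are what get clustered with k-means."
  (loop for (doc . rest) on corpus append
        (loop for other in rest
              for shared = (document-intersection doc other)
              when (>= (length shared) min-overlap)
              collect shared)))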

The authors point out that their approach suffers from finding very similar topics, some unfocused topics and some topics that were really combinations of two (or more) separate topics. On the other hand, with only statistics to go on, the algorithm can only do so well given its data.

The paper provides a good high-level summary of their work in three pages and is worth reading, especially for someone from outside the field (like me!).


Deprecating
Thursday, May 6, 2004

One thing I've liked in Java is the ability to explicitly mark functions as deprecated. Even though it's not part of the Common Lisp standard, there are lots of ways to accomplish the same thing. One involves writing a compiler macro:

(defmacro deprecated (&body body)
  ;; BODY is an optional documentation string followed by a single
  ;; defining form, e.g., (defun name ...)
  (let ((documentation nil)
        (name nil))
    (when (stringp (first body))
      (setf documentation (first body)
            body (rest body)))
    (setf name (cadar body))
    `(progn
       (define-compiler-macro ,name (&whole form &rest args)
         (declare (ignore args))
         (fresh-line *error-output*)
         (write-string ,(format nil "; warning: ~a has been deprecated.~@[ ~a~]"
                                name documentation) *error-output*)
         (terpri *error-output*)
         (values form))
       ,@body)))
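
Used like so (old-mean is a made-up example):

(deprecated "Use new-mean instead."
  (defun old-mean (list)
    (/ (reduce #'+ list) (length list))))

Now compiling any call to old-mean prints the warning.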

Making it a compiler macro is cool because it will only bite at compile time and not bother us at run-time. On the down side, Common Lisp implementations are not required to pay any attention to compiler macros (though I think all the major ones do) and we (probably) won't get the warning if we evaluate a form in the Listener.


Reviews, reviews, reviews
Monday, May 3, 2004

Why so many paper reviews?

I've taken on two new projects recently. One involves building a blackboard system to analyze data using Bayesian inference. The other is some personal research on dynamic clustering that really takes time into account.

I'm doing all of this work in Lisp (of course), so I guess it's relevant to a blog that claims to be about Lisp...


Exploiting Relational Structure to Understand Publication Patterns in High-Energy Physics
Monday, May 3, 2004

This paper covers how Amy McGovern et al. at the KDL lab at the University of Massachusetts used the relational structure of citation data to answer questions like:

  • Can we predict why some papers receive more citations than others?
  • What factors contribute to author influence?
  • What factors contribute to journal publication?
  • What are the communities (schools of thought) in the high-energy physics community?

This is a clear and well written paper and its arguments are easy to follow. In addition, it limns a variety of techniques used within the KDD community for data analysis and hypothesis testing.


Unifying Data-Directed and Goal-Directed Control
Monday, May 3, 2004

This AAAI 1982 paper by Dan Corkill, Victor Lesser and Eva Hudlicka presents a nice overview of how bottom-up and top-down reasoning can be combined in the blackboard framework. The essence of the solution is to split each level of the blackboard into two parts: one data-directed and one goal-directed. When a data blackboard event occurs, the blackboard monitor adds goals to the goal blackboard. Because the goals are now explicit, a planner may reason about them and create inter-related plans of KSs (knowledge sources) to achieve the goals. These can involve goal-to-KS mappings, goal/sub-goal hierarchies, overlapping goals, and goal pre-conditions. This new architecture was used in the DVMT, and preliminary results showed that the total number of KSs invoked to solve some classes of problems was significantly reduced.


Clustering Relational Data Using Attribute and Link Information
Thursday, April 29, 2004

This paper by David Jensen, Micah Adler and Jennifer Neville describes a method that uses both link and attribute information to cluster graphs of relational data. It was presented at a Text Mining and Link Analysis Workshop at IJCAI-2003. The method combines link weights with attribute weights to form a similarity metric and then uses one of three graph partitioning algorithms (min-cut, MajorClust and spectral partitioning) to partition the graph. All three rely on the assumption that linkage depends on similar attribute values (which is reasonable in many domains). The method was tested on synthetic data with varying levels of independence. All of the algorithms work well when the link and attribute data are highly correlated; spectral partitioning seems to hold up best as noise increases. All in all, the paper covers a reasonable idea reasonably well.


Data Mining in Social Networks
Wednesday, April 28, 2004

This paper by David Jensen and Jennifer Neville provides a lightweight overview of some of the issues in analyzing relational data. It includes a taxonomy of criteria by which to judge datasets and tools (e.g., network size, connectivity, relational dependence, and so forth) and highlights how concentrated linkages, degree disparity and relational autocorrelation can lead to biased feature selection and spurious correlations.

Although several technologies are mentioned, the example running throughout the paper is Jensen and Neville's own QGraph query language and Relational Probability Tree (RPT) as applied to the Internet Movie Database. I've not read the details of RPT, but it appears to be similar to decision trees except that the input to the algorithm is a set of not-necessarily-isomorphic graphs instead of a set of attribute vectors. The user must decide the query that pulls these graphs from the dataset and the attribute to be learned. The RPT algorithm then builds its decision nodes based on attributes of the graphs in the input set. These can consist of "regular" attributes, composites (e.g., link counts), statistical relations (e.g., (> (mean birth-year) X)), and inequalities (e.g., (> (proportion (> birth-year X)) Y)). This is a big class of possible decisions and it's not clear how they are determined.

In summary, it's an easy read that doesn't require much background to understand. On the other hand, one is left wishing that more details were covered.


The evolution of RISC technology at IBM
Wednesday, April 28, 2004

This short paper by the father of RISC (John Cocke) traces the early history of RISC technology. It's an interesting read but lacks plot and character development (smile).


Anatomy of the Grid
Wednesday, April 7, 2004

This readable paper by Foster, Kesselman and Tuecke provides an overview of the Grid as a means of handling "coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations". They outline the general structure of the Grid in terms of protocols and dismiss common misperceptions such as "the Grid is the Internet" or "the Grid is a distributed OS".


Virtual machines
Wednesday, April 7, 2004

I was just reading a tutorial on SQLite's virtual machine. It's not a design strategy I naturally think of using, but it seems like a good one: transform a problem into a simple abstract (virtual) machine plus a language that compiles into it. It's interesting to me that although I constantly think in terms of language transformation and macros, I don't think about this sort of compiler. Perhaps it's more natural for people who write in C and use Lex and YACC.

Then again, maybe because I can transform Lisp to Lisp, I don't need this other strategy. On the other hand, maybe this strategy would help structure things better than organically grown -- and difficult to prune -- macros.
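
For flavor, the strategy fits in a few lines even in Lisp (a toy machine of my own devising; nothing to do with SQLite's actual VM):

(defun run-vm (program &aux stack)
  "Execute a list of (opcode &optional argument) instructions on a stack."
  (dolist (instruction program (first stack))
    (destructuring-bind (op &optional arg) instruction
      (ecase op
        (:push (push arg stack))
        (:add  (push (+ (pop stack) (pop stack)) stack))
        (:mul  (push (* (pop stack) (pop stack)) stack))))))

;; (run-vm '((:push 3) (:push 4) (:add) (:push 2) (:mul))) => 14

The "language that compiles into it" would then be whatever front end emits these instruction lists.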


When not to apply yourself
Saturday, April 3, 2004

Apply is cool but don't use it unless you need to. I recently found this definition of #'mean:

(defun mean (list)
  (/ (apply #'+ list) (length list)))

Which is certainly clean and succinct. Think about what happens, however, when this is called with a really big list (e.g., 100,000 elements): apply is called with 100,000 arguments. All of these need to be pushed onto the stack and, generally speaking, there won't be room. Note that Lisp has two implementation-dependent constants (lambda-parameters-limit and call-arguments-limit) which proclaim how many arguments the implementation feels it can support when defining and calling functions. To get around this, you can use reduce:

(defun mean (list) (float (/ (reduce #'+ list) (length list))))

or even a newfangled iteration construct:

(defun mean (list)
  (let ((sum 0) (count 0))
    (loop for x in list do
          (incf sum x)
          (incf count))
    (float (/ sum count))))

The #'reduce method is slightly slower than the loop in this case because we also need to get the length of the list (two passes) rather than computing both in one pass. I've added the call to #'float because we probably don't want to be trafficking in ratios.
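
By the way, it's a one-liner to see what your implementation allows (the standard only requires both constants to be at least 50; actual values vary widely):

(values lambda-parameters-limit call-arguments-limit)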



Copyright -- Gary Warren King, 2004 - 2006