ClojureScript Demo : Convex Hull

Update : fixed a bug where the hull was incorrectly calculated due to duplicate points being generated in the random set.

ClojureScript looks like a solid approach to building applications that target JavaScript VMs. It’s built on top of Google’s Closure Compiler/Library, which is very intriguing and is the best approach they could have taken (now that I’ve played with it a little). Being new to both Closure and ClojureScript, I was curious what it might feel like to build an application using these tools. I’ve mostly despised programming in JavaScript for browsers no matter what hyped-up library is available (this includes jQuery, which is the best of a bad bunch in my opinion). So I decided to write a ClojureScript application that runs in the browser, based on a previous Clojure implementation of a Convex Hull algorithm with heuristics.

This was a piece of cake. I really like the pre-compiled approach that relies on the Closure compiler/library. It just feels like you’re writing a regular application instead of trying to force the browser to do the ‘correct’ thing with run-time code and the DOM. There were a few differences that I ran into : a few functions don’t yet exist, and using macros is not as clean as I’d expected. Macros have to be implemented in Clojure and then referenced from ClojureScript. No big deal really.

Here’s the demo

Here’s all the UI code. Not much really at < 100 lines. Very cool.

(def edge-stroke (graphics/Stroke. 1 "#444"))
(def blue-edge-stroke (graphics/Stroke. 1 "#66b"))
(def green-edge-stroke (graphics/Stroke. 1 "#0f0"))
(def white-fill (graphics/SolidFill. "#fff"))
(def blue-fill (graphics/SolidFill. "#66b"))
(def green-fill (graphics/SolidFill. "#0f0"))
(def trans-fill (graphics/SolidFill. "#0f0" 0.001))

(def g
  (doto (graphics/createGraphics "440" "440")
    (.render (dom/getElement "graph"))))

(defn draw-graph []
    (let [canvas-size (. g (getPixelSize))]
     (.drawRect g 0 0 
            (.width canvas-size) (.height canvas-size) 
            edge-stroke white-fill)))

(defn scale-coord [coord]
  (+ 20 (* 4 coord)))

(defn draw-points
    [points stroke fill]
    (doseq  [[x y :as pt] points]
        (.drawEllipse g (scale-coord x) (scale-coord y) 
               3 3 stroke fill)))

(defn draw-convex-hull
    [points stroke fill]
    (let [path (graphics/Path.)
      [xs ys :as start] (first points)]
     (.moveTo path (scale-coord xs) (scale-coord ys))
     (doall (map (fn [[x y :as pt]]
             (.lineTo path (scale-coord x) (scale-coord y)))
             (rest points)))
     (.lineTo path (scale-coord xs) (scale-coord ys))
    (.drawPath g path stroke fill)))

(defn print-points
    [points el]
    (doseq [pair points]
       (dom/append el
               (str " [" (first pair) " " (second pair) "]"))))

(defn ^:export rundemo []
  (let [cnt 1E2
        rpts (apply
              hash-set
              (map (fn [n]
                     [(rand-int (inc cnt))
                      (rand-int (inc cnt))])
                   (range 1 cnt)))
        text-input-title (dom/getElement "text-input-title")
        text-input (dom/getElement "text-input")
        text-results-status (dom/getElement "text-results-status")
        text-results (dom/getElement "text-results")]
    ;; draw all points
    (dom/append text-input-title
                (str "Random generation of " cnt " points..."))
    (draw-points rpts blue-edge-stroke blue-fill)
    (print-points rpts text-input)
    ;; calc hull
    (dom/append text-results-status
                (str "Calculating convex hull ..."))
    (let [r1 (randomset rpts false)
          r2 (randomset rpts true)]
      (dom/append text-results-status (str " done.\n"))
      ;; update the results
      (print-points r2 text-results)
      (dom/append text-results-status
                  (str "Convex hull has " (count r1) " points.\n"))
      ;; draw hull points
      (draw-points r1 green-edge-stroke green-fill)
      ;; draw hull
      (draw-convex-hull r1 green-edge-stroke trans-fill)
      ;; return the results
      [rpts r2])))

;; Auto-update
(defn ^:export poll []
  (let [timer (goog.Timer. 15000)]
    (do (rundemo)
        (. timer (start))
        (events/listen timer goog.Timer/TICK rundemo))))

The future of client-side programming just got way better thanks to Rich and team !
All code is here.

Purely Functional Data Structures & Algorithms : Fast Fourier Transform in Qi

In this second post in the series we look at an implementation of the always-useful Fast Fourier Transform.

(FFT) An algorithm for computing the Fourier transform of a set of discrete data values. Given a finite set of data points, for example a periodic sampling taken from a real-world signal, the FFT expresses the data in terms of its component frequencies. It also solves the essentially identical inverse problem of reconstructing a signal from the frequency data.

The FFT is a mainstay of numerical analysis. Gilbert Strang described it as “the most important algorithm of our generation”. The FFT also provides the asymptotically fastest known algorithm for multiplying two polynomials.
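As a point of comparison with the Qi version below, the radix-2 divide-and-conquer can be sketched in a few lines of Python (my own illustrative version; it computes the same transform but shares no code with the post):

```python
import cmath

def fft(xs):
    # Radix-2 Cooley-Tukey: split into even/odd samples, transform each half,
    # then combine with the butterfly:
    #   y[k]       = e[k] + w^k * o[k]
    #   y[k + n/2] = e[k] - w^k * o[k]
    n = len(xs)                        # n must be a power of two
    if n == 1:
        return list(xs)
    evens, odds = fft(xs[0::2]), fft(xs[1::2])
    w = cmath.exp(-2j * cmath.pi / n)  # principal n-th root of unity
    out = [0j] * n
    for k in range(n // 2):
        t = w ** k * odds[k]
        out[k] = evens[k] + t
        out[k + n // 2] = evens[k] - t
    return out
```

Running it on the same 16-sample square wave as the Qi test below yields 8 in bin 0, -8 in bin 8 and (numerically) zero everywhere else.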

Our implementation comes in at just under 100 lines of code

(declare atan [number --> number])
(define atan X -> (ATAN X))

(declare cos [number --> number])
(define cos X -> (COS X))

(declare sin [number --> number])
(define sin X -> (SIN X))

(tc +)

\ Complex numbers \

(datatype complex
    Real : number; Imag : number;
    [Real Imag] : complex;)

(define complex-mult
  {complex --> complex --> complex}
  [R1 I1] [R2 I2] -> [(- (* R1 R2) (* I1 I2))
                      (+ (* R1 I2) (* I1 R2))])

(define complex-add
  {complex --> complex --> complex}
  [R1 I1] [R2 I2] -> [(+ R1 R2) (+ I1 I2)])

(define complex-diff
  {complex --> complex --> complex}
  [R1 I1] [R2 I2] -> [(- R1 R2) (- I1 I2)])

\ Fast Fourier Transform \

(define butterfly-list
    {((list complex) * ((list complex) * (list complex)))
     --> ((list complex) * ((list complex) * (list complex)))}
    (@p X (@p X1 X2)) -> (if (empty? X)
                             (@p X (@p (reverse X1) (reverse X2)))
                              (@p (tail (tail X))
                                  (@p (cons (head X) X1)
                                      (cons (head (tail X)) X2)))))

(define calc-results
    {(((list complex) * (list (list complex))) * 
                        ((list complex) * (list complex)))
     --> (((list complex) * (list (list complex))) * 
                            ((list complex) * (list complex)))}
    (@p (@p [W WN] [YA YB]) (@p Y1 Y2)) ->
    (if (and (empty? Y1) (empty? Y2))
        (@p (@p [W WN] [(reverse YA) (reverse YB)]) (@p Y1 Y2))
         (@p (@p [(complex-mult W WN) WN]
                 [(cons (complex-add  (head Y1) (complex-mult W (head Y2))) YA)
                 (cons (complex-diff (head Y1) (complex-mult W (head Y2))) YB)])
    (@p (tail Y1) (tail Y2)))))

(define fft
    {number --> complex --> (list complex) --> (list complex)
     --> (list complex)}
    1 WN X Y -> [(head X)]
    2 WN X Y -> [(complex-add  (head X) (head (tail X)))
                 (complex-diff (head X) (head (tail X)))]
    N WN X Y -> (let M   (round (/ N 2))
                     Inp (butterfly-list (@p X (@p [] [])))
                     X1  (fst (snd Inp))
                     X2  (snd (snd Inp))
                     Y1  (fft M (complex-mult WN WN) X1 [])
                     Y2  (fft M (complex-mult WN WN) X2 [])
                     W   [1 0]
                     Res (calc-results (@p (@p [W WN] [[] []]) (@p Y1 Y2)))
                     (append (head (snd (fst Res)))
                             (head (tail (snd (fst Res)))))))

(define dotimes-fft
    {number --> number --> complex --> (list complex) --> (list complex)
    --> (list complex)}
    Iterations Size W Input Res ->
    (if (<= Iterations 0)
        Res
        (dotimes-fft (- Iterations 1) Size W Input (fft Size W Input []))))

(define run-fft
    {number --> number --> (list complex)
     --> (list complex)}
    Iterations Size Input -> (let Pi    (* 4 (atan 1))
                                  Theta (* 2 (/ Pi Size))
                                  W     [(cos Theta) (* -1 (sin Theta))]
                                  (dotimes-fft Iterations Size W Input [])))

Let’s give it a spin …

\ Square wave test \

(26-) (time (run-fft 100000 16 
             [[0 0] [1 0] [0 0] [1 0] [0 0] [1 0] [0 0] [1 0]
              [0 0] [1 0] [0 0] [1 0] [0 0] [1 0] [0 0] [1 0]]))

Evaluation took:
  2.999 seconds of real time
  2.942718 seconds of total run time (2.798716 user, 0.144002 system)
  [ Run times consist of 0.371 seconds GC time, and 2.572 seconds non-GC time. ]
  98.13% CPU
  6,282,874,678 processor cycles
  1,641,619,888 bytes consed

[[8 0] [0.0 0.0] [0.0 0.0] [0.0 0.0] 
 [0.0 0.0] [0.0 0.0] [0.0 0.0] [0.0 0.0]
 [-8 0] [0.0 0.0] [0.0 0.0] [0.0 0.0] 
 [0.0 0.0] [0.0 0.0] [0.0 0.0] [0.0 0.0]] : (list complex)

All Qi code in this post is here.

Purely Functional Data Structures & Algorithms : Red-Black Trees in Qi

Update 2011/06/28 : Source has been modified to compile with Shen

This is the first in a series of posts that will demonstrate the implementation of many well-known (and less-known) data structures and algorithms using a purely functional approach.
We will use Qi as our implementation language for a number of reasons :

    It’s a Lisp : macros, EVAL, hash-tables, property-lists, meta-programming etc.
    Pattern matching.
    Optional static type checking.
    A Turing-complete type system !

In this first post we look at an implementation of the well-known Red-Black tree abstract data type in Qi.

A red–black tree is a type of self-balancing binary search tree, a data structure used in computer science, typically to implement associative arrays. The original structure was invented in 1972 by Rudolf Bayer and named “symmetric binary B-tree,” but acquired its modern name in a paper in 1978 by Leonidas J. Guibas and Robert Sedgewick. It is complex, but has good worst-case running time for its operations and is efficient in practice: it can search, insert, and delete in O(log n) time, where n is the total number of elements in the tree. Put very simply, a red–black tree is a binary search tree that inserts and removes intelligently, to ensure the tree is reasonably balanced.

Our implementation comes in at 57 lines of code (with the balance function at only 7 lines)

(tc +)

(datatype tree-node
    Key : number; Val : B;
    [Key Val] : tree-node;)

(datatype color
    if (element? Color [red black])
    Color : color;)

(datatype tree
    if (empty? Tree)
    Tree : tree;

    Color : color; LTree : tree; TreeNode : tree-node; RTree : tree;
    [Color LTree TreeNode RTree] : tree;)

(define node-key
    {tree-node --> number}
    [Key Val] -> Key)

(define make-tree-black
    {tree --> tree}
    [Color A X B] -> [black A X B])

(define member
    {tree-node --> tree --> boolean}
    X [] -> false
    X [Color A Y B] -> (if (< (node-key X) (node-key Y))
         (member X A)
         (if (< (node-key Y) (node-key X))
             (member X B)
             true)))

(define balance
    {tree --> tree}
    [black [red [red A X B] Y C] Z D] -> [red [black A X B] Y [black C Z D]]
    [black [red A X [red B Y C]] Z D] -> [red [black A X B] Y [black C Z D]]
    [black A X [red [red B Y C] Z D]] -> [red [black A X B] Y [black C Z D]]
    [black A X [red B Y [red C Z D]]] -> [red [black A X B] Y [black C Z D]]
    S -> S)

(define insert-
    {tree-node --> tree --> tree}
    X [] -> [red [] X []]
    X [Color A Y B] -> (if (< (node-key X) (node-key Y))
                           (balance [Color (insert- X A) Y B])
                           (if (< (node-key Y) (node-key X))
                               (balance [Color A Y (insert- X B)])
                               [Color A Y B])))

(define insert
  {tree-node --> tree --> tree}
  X S -> (make-tree-black (insert- X S)))

This is a reasonably performant implementation (we haven’t even tried to optimize it yet).

(19-) (run-tests NIL)
tree: [black
       [red [black [red [] [1 1] []] [2 2] [red [] [5 5] []]] [7 7]
        [black [red [] [8 8] []] [11 11] []]]
       [14 14] [black [] [15 15] []]]
12 is a member ? false
8 is a member ? true

Creating tree with 100000 elements ...
Evaluation took:
  0.578 seconds of real time
  0.562833 seconds of total run time (0.491572 user, 0.071261 system)
  [ Run times consist of 0.160 seconds GC time, and 0.403 seconds non-GC time. ]
  97.40% CPU
  1,210,617,335 processor cycles
  168,551,696 bytes consed

Performing lookups in tree with 100000 elements ...
666 in tree ? true
Evaluation took:
  0.000 seconds of real time
  0.000044 seconds of total run time (0.000035 user, 0.000009 system)
  0.00% CPU
  86,110 processor cycles
  0 bytes consed

-1 in tree ?
Evaluation took:
  0.000 seconds of real time
  0.000024 seconds of total run time (0.000021 user, 0.000003 system)
  100.00% CPU
  46,368 processor cycles
  0 bytes consed

A comparable implementation in Java/C++ will usually run to a few hundred lines of code.
All Qi code in this post is here.

Happy PI day ! (in QiII)

Qi is the future of Lisp.
It is Lisp with many great features such as pattern matching, a Turing-complete static type system (even more powerful than Haskell’s type system) and many others.

So in the spirit of PI day, here’s an implementation that calculates PI using Machin’s formula.
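Machin’s formula is pi/4 = 4*arctan(1/5) - arctan(1/239), with the arctangents evaluated in fixed-point integer arithmetic scaled by 10^(digits+guard). A Python sketch of that scheme for comparison (my own illustrative version, not a translation of the Qi code below):

```python
# Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239), computed with
# scaled integers; every quantity is multiplied by unity = 10^(digits+guard).
def arccot(x, unity):
    # arctan(1/x) = 1/x - 1/(3*x^3) + 1/(5*x^5) - ... via integer division
    total = term = unity // x
    n, sign, xsq = 3, -1, x * x
    while term:
        term //= xsq                 # next power of 1/x^2 (scaled)
        total += sign * (term // n)  # +/- term/n with truncation
        n, sign = n + 2, -sign
    return total

def machin_pi(digits):
    guard = 10                       # guard digits absorb truncation error
    unity = 10 ** (digits + guard)
    pi = 4 * (4 * arccot(5, unity) - arccot(239, unity))
    return pi // 10 ** guard         # pi scaled by 10^digits, as an integer
```

For example `machin_pi(20)` yields the integer 314159265358979323846, i.e. pi to 20 decimal places.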

(define ceiling 
  X -> (CEILING X))
(declare ceiling [number --> number])

(define expt 
  X Y -> (EXPT X Y))
(declare expt [number --> number --> number])

(define floor 
  X Y -> (FLOOR X Y))
(declare floor [number --> number --> number])

(define log
  X Y -> (LOG X Y))
(declare log [number --> number --> number])

(tc +)

(define arccot-
  {number --> number --> number --> number --> number --> number} 
  X N XPOWER    0 _ -> 0
  X N XPOWER TERM 1 -> (+ (arccot- X (+ N 2) (floor XPOWER X) 
                                     (floor XPOWER N) 0) (floor XPOWER N))
  X N XPOWER TERM 0 -> (- (arccot- X (+ N 2) (floor XPOWER X) 
                                      (floor XPOWER N) 1) (floor XPOWER N)))

(define arccot
  {number --> number --> number}
  X UNITY -> (let XPOWER (floor (/ UNITY X) 1)
                  (arccot- (* X X) 1 XPOWER (floor XPOWER 1) 1)))

(define machin-pi
  {number --> number} 
  DIGITS -> (let GUARD (+ 10 (ceiling (log DIGITS 10)))
                 UNITY (expt 10 (+ DIGITS GUARD))
                 (floor (* 4 (- (* 4 (arccot 5 UNITY)) 
                                (arccot 239 UNITY))) (expt 10 GUARD))))

And the output …

(time (machin-pi 10000))

Evaluation took:
  0.379 seconds of real time
  0.372112 seconds of total run time (0.269730 user, 0.102382 system)
  [ Run times consist of 0.055 seconds GC time, and 0.318 seconds non-GC time. ]
  98.15% CPU
  903,769,308 processor cycles
  78,698,160 bytes consed

314159265358979323846264338327950 ... 1655256375678 : number

Compared with Common Lisp, Haskell and Clojure.

Philosophy and Lisp

Programming language wars don’t have to be religious wars. Programming languages should be rooted in philosophy. The more a programming language is rooted in sound philosophy, the more value it has.

Over the years, many of the posts on this blog have been regarding some programming language, algorithm or technology. Some posts have highlighted why Lisp is the most powerful and useful programming language paradigm available to man at this point in the history of computer science.

Explicitly pointing out examples of Lisp code is always insightful and important (at least to those open to evidence and reason).

Still there are people who cannot (or will not?) grasp just why Lisp is, has been (for the past half-century) and will be so important to the development and growth of computer science. For example, some people, in spite of having read Paul Graham’s clear essays on Lisp (which make it really easy to grasp why Lisp is important), still often seem to parrot incoherent, illogical arguments and myths against Lisp.

My goal with many of the blog posts here has been to bring to folks interested in Lisp and related computer science topics some understanding that is grounded in integrity, and therefore of real value to those who pursue it.

Within computer science, academia and industry there are too many disparate choices presented to the various stakeholders, from the cubicle dwellers all the way up to the CEOs and professors. The elephant in the room with all these choices (and what most of them have in common) is that they are lacking in integrity and value. Profit, control, ignorance, altruism, stupidity, inexperience, grant money, incapability, kickbacks, bonuses, salaries, titles, fear and many other reasons explain why integrity and value are lacking.

The same has happened with all of the other sciences too, and from a philosophical standpoint the causes are very similar.

At this point some may ask why philosophy is even important to computer science, let alone to a programming language called Lisp. The kind of person that asks this question is usually the kind of person that has never understood why philosophy itself is so important. Well, just how important is philosophy ? The short answer is that, after life itself, philosophy is the second most important thing to a human being.

It’s critical, after stating the high importance of philosophy, that I quickly define what I mean by philosophy. By philosophy I mean the study of the fundamental nature of knowledge, reality and existence using the tools of observation, evidence, empiricism, logic and reason. This is the classical philosophy of Aristotle and Socrates, which is rational absolutism. It is NOT the charlatan ‘philosophy’ of mysticism, positivism, relativism, perspectivism, nihilism and altruism of Plato, Marx, Immanuel Kant, Kierkegaard, Hegel and so many others whose theories have tragically played out in human history, and some of which unfortunately are still adhered to right up until now. They are more correctly called what they are : ideologies or religions. Religion is irrational absolutism. The philosophy I am talking about is made distinct in that it is rational absolutism. It is therefore not for bar-room outbursts or musings between tenured professors in dusty old buildings. It is not the salesy popular positive-thinking conventions, nor the caffeine-overdose incoherent babbling at church or AA gatherings. It is not the foggy, upbeat, tangled ramblings of relativism at Burning Man. It is a study that has practical applications right from the start.

Without philosophy you would not be reading this blog post. You would not have a computer; there would be no internet. Society would not have produced books, hygiene would not exist, the Enlightenment would never have happened. Mathematics and the sciences would never have advanced to where they are today, and we would not be benefiting from them, if it weren’t for philosophy. From the dysfunctional quirks to the atrocities perpetrated through conflict around this planet, which we all witness each day in society, the root cause is the problem of ideologies and religions usurping the rightful place of philosophy. Philosophy is a matter of life and death. Philosophy is as critical to ethics and morality as it is to mathematics and science. This has been conclusively proved from first principles, so I will not do it here. The human race has advanced in technology further than anyone could have imagined, and yet we still resort to coercion and violence at all levels of society. This is because we have not based our ethics and morality on philosophy. Instead we have given these responsibilities of moral and ethical definition to authority figures : the government, industry, academia, the church, and well-intentioned but dishonest and flawed parental coercion that does incalculable damage to children and then plays out in our societies through their adult lives in crime and violence or, if we’re lucky, merely benign stupidity and arrested personal development.

Well, that’s quite the detour, but it’s important to highlight the importance of philosophy.
Here I put forward that Lisp’s outstanding importance to computer science, compared with other programming languages, is based on its solid philosophical foundation. This is quite simple to prove and I will do so in a few paragraphs.
Lisp is based on the lambda calculus, a formal system for function definition, function application and recursion. Lisp’s contribution to programming language theory is unfortunately, for the most part, unrecognized by the majority of programmers today. For example, Lisp and the typed lambda calculi serve as the foundation for modern type systems. At the other end of the spectrum, no equivalent of Lisp’s concept of macros exists in any other programming language even today. If one did, that language would be a Lisp implementation.

Let’s look further into Lisp. I stated that Lisp is based on the formal system of the lambda calculus. The lambda calculus is built on functions, logic and predicates, recursion and a number of other concepts that are fundamental in mathematics. It follows that the greater the fidelity a programming language has to these mathematical concepts, and the more it builds upon them, the more powerful the programming language will be. History provides the evidence : no other programming language has done this better than Lisp.

We could go even deeper and ask why these fundamental mathematical concepts are so crucial. The answer takes us into philosophy, upon which mathematics is based. Sound philosophy demanded that these mathematical concepts be tested by evidence, logic and rigor, from some very basic premises that were built up into more complex and powerful structures of thought which were proved to be true. Metaphorically: mathematics and the sciences are trees which can only grow in the soil of philosophy. The reasons are plain as to why religion, superstition and mysticism are not responsible for putting man on the moon or leading to the discovery of DNA.

The scientific/mathematical side of Lisp is just half of the explanation though. The other half of Lisp is the ethical and moral side. Stay with me. Most programmers hardly ever associate a programming language with ethics and morality, but these do play a role in the design and use of a language. This is because human beings must use these programming languages. They must fill their minds with the concepts and limitations that these programming languages require for their use and application. When a language designer decrees (when he should instead be deducing) that some of the features that were available to him in the design are too powerful for the users of his language, then he is in the realm of morality and ethics, and as such is subject to valid moral and ethical scrutiny, which is in turn based on rational and evidence-based philosophy. You may be a computer scientist, but you are still to be held morally and ethically responsible for your creation and what it subjects the minds of your users to. On a daily basis and for the masses of programmers, their language is unfortunately seen as a religious preference. It is an ideology forced upon them by their indoctrination from their peer group in academia or industry. Many are not even aware that their everyday programming language even matters. Most just don’t care. They are unaware of how the languages that they use affect their ability to think and solve problems.

Lisp’s design was such that it considered the user of the language as important as the designer of the language. This shows in that Lisp has compile-time and run-time macros, which effectively allow the user to change the language itself at its most basic level if they so desire. Contrast this design with the dictatorial designs of popular languages in industry. Common Lisp’s design takes the freedom of the user even more seriously, being a multi-paradigm Lisp.

In conclusion, I don’t want to suggest that everyone should be using Lisp against their will. That would run counter to the philosophy of Lisp. Lisp is not a religion in the way other programming languages are seen. The myth of the newbie-eating Lisp hacker is just that, a myth. Lisp is embraced by the minority, just as the sciences are. It has been shown that Lisp is based on sound philosophical principles, and that these have resulted in it being the most successful (not the most popular) programming language in history. Its contribution to programming language theory is remarkable. It has also imparted enjoyment, programming power and cognitive freedom to its users like no other programming language has.

Keep Lisping !

64-bit Snow Leopard SDL binaries

I picked up the new 11″ MacBook Air and have been using it for the last couple of weeks as my primary dev machine. It’s been great. This was after spending over a year on Linux. Getting SDL with lispbuilder set up and running on OS X these days is quite a feat compared with the same on Linux. It seems that there are no official SDL binaries for OS X that actually work. MacPorts is great, but the SDL ports only work if whoever you are distributing your application to has the MacPorts SDL ports installed. Even then I could barely get SDL (and the extensions such as SDL_gfx, SDL_mixer, SDL_ttf) all set up and working with lispbuilder-sdl. SBCL would always spew something about not being able to find native libs. When it comes to the MacPorts SDL ports, it’s a mix of i386/x86_64 between them. A recipe for a headache to say the least. Yes, I tried building everything using the universal variant. It just doesn’t work.

So I had to get the source for everything and build all the binaries as x86_64 by hand so that the frameworks were actually distributable. I now know way more than I wanted to about dylibs and framework magic incantations. These frameworks are distributable, but there’s one catch : they have to be placed in /Library/Frameworks. Not exactly perfect, but much better than what’s out there on the web. If anyone has been able to get lispbuilder-sdl working with these SDL frameworks in a custom directory, then please drop me a line. Otherwise, a disk image containing the binaries can be downloaded here.

SDL Frameworks x86_64 (10.6.x).dmg (md5 70a68450c648de742affcb13923a61f4)

If you are going to use lispbuilder, save yourself lots of time and use Xach’s Quicklisp to install it.

Happy holiday hacking !!!

Anaphoric (aka ‘Un-hygienic’) macros in CL

As an example, let’s look at a fairly common algorithm : breadth-first traversal of a binary tree, also called level-order traversal.

“In graph theory, breadth-first search (BFS) is a graph search algorithm that begins at the root node and explores all the neighboring nodes. Then for each of those nearest nodes, it explores their unexplored neighbor nodes, and so on, until it finds the goal.”
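Before the macro version, the underlying loop is worth pinning down. Here is a plain Python sketch of level-order traversal (entirely my own; the `Node` class and keyword flags are illustrative, not from the post):

```python
from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def level_order(root, left_to_right=True, top_to_bottom=True):
    # Classic BFS: pop nodes from the front of a queue, push children on the back.
    if root is None:
        return []
    # Reversing the final list also flips the order within each level, so for
    # bottom-to-top output we enqueue children in the opposite horizontal order.
    ltr = left_to_right if top_to_bottom else not left_to_right
    out, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        out.append(node.val)
        kids = (node.left, node.right) if ltr else (node.right, node.left)
        queue.extend(k for k in kids if k is not None)
    return out if top_to_bottom else list(reversed(out))
```

The bottom-to-top flip here mirrors what the CL macro below achieves by rebinding rtol.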

First let’s define a simple node struct and create a tree :

(defstruct (btnode)
  (val nil)
  (left nil)
  (right nil))

(defun print-node (n) (format t "~A " (btnode-val n)))

(defvar root #S(BTNODE
                  :VAL 5
                  :LEFT #S(BTNODE
                           :VAL 3
                           :LEFT #S(BTNODE
                                    :VAL 2
                                    :LEFT #S(BTNODE :VAL 1 :LEFT NIL :RIGHT NIL)
                                    :RIGHT NIL)
                           :RIGHT #S(BTNODE :VAL 4 :LEFT NIL :RIGHT NIL))
                  :RIGHT #S(BTNODE
                            :VAL 7
                            :LEFT #S(BTNODE :VAL 6 :LEFT NIL :RIGHT NIL)
                            :RIGHT #S(BTNODE
                                      :VAL 8
                                      :LEFT NIL
                                      :RIGHT #S(BTNODE :VAL 9 :LEFT NIL :RIGHT NIL)))))

Let’s add some twists : let’s say we want to be able to traverse horizontally left-to-right or right-to-left, and vertically from top to bottom or from bottom to top. Also, what if we wanted to perform different operations on the nodes, and have a way to filter certain nodes from the results ?
Anaphoric macros make this possible in a few lines of code :

(defmacro with-breadth-first-traversal (&body body)
  `(macrolet ((bft-init ((&optional &key test) &body body)
                `(let* ((res   (list node))
                        (queue (list node))
                        (btot  (eql vorder 'bottom-to-top))
                        (rtol  (eql horder 'right-to-left))
                        (rtol  (if btot (not rtol) rtol)))
                   ,@body))
              (bft-loop-init (&body body)
                `(let* ((node     (car queue))
                        (lcnode   (btnode-left  node))
                        (rcnode   (btnode-right node))
                        (children (when rcnode
                                    (cons rcnode nil)))
                        (children (if lcnode
                                      (cons lcnode children) children))
                        (children (if rtol (reverse children) children))
                        (nqueue   (append (rest queue) children))
                        (nres     (append res children)))
                   ,@body)))
     ,@body))

;; Define callable variations of breadth first traversal functions

(with-breadth-first-traversal

  (defun traverse-tree-by-level
      (node fun &optional horder vorder)
    (bft-init ()
      (loop while queue do
            (bft-loop-init
              (setf queue nqueue)
              (setf res   nres)))
      (mapcar fun (if btot (reverse res) res))))

  (defmacro traverse-tree-by-level-recursive-m
      (node fun &optional horder vorder &key test)
    (declare (ignore node fun horder vorder))
    `(bft-init (:test ,test)
       (labels ((traverse (res queue)
                  (if (not queue)
                      (if ,test (delete-if-not ,test res) res)
                      (bft-loop-init (traverse nres nqueue)))))
         (let ((res (traverse res queue)))
           (mapcar fun (if btot (reverse res) res))))))

  (defun traverse-tree-by-level-recursive
      (node fun &optional horder vorder)
    (traverse-tree-by-level-recursive-m node fun horder vorder)))


Here we have used anaphoric macros to implement two ways to perform BFS : the first uses a while loop and the second uses recursion. We’ve also allowed calling code to pass in a :test key lambda that will filter nodes from our results. This is typical of the kind of flexibility that can be gained when implementing algorithms using anaphoric macros.
Here are the results of printing out the traversals :

(defun run-bfs-test-variations (variations)
  (let ((orders (mapcar #'(lambda (x)
                            (mapcar #'(lambda (y) (cons x y))
                                    '(left-to-right right-to-left)))
                        '(top-to-bottom bottom-to-top))))
    (mapcar #'(lambda (test)
                (format t (car test))
                (map 'list #'(lambda (fnames)
                               (format t "~13A ~A : " (car fnames) (cdr fnames))
                               (apply (cadr test) (list root #'print-node
                                                        (cdr fnames) (car fnames)))
                               (format t "~%"))
                     (apply #'append orders))
                (format t "~%"))
            variations)))

(run-bfs-test-variations
 '(("level order function : ~%"               traverse-tree-by-level)
   ("level order function recursive : ~%"     traverse-tree-by-level-recursive)))

;; level order function :
;; TOP-TO-BOTTOM LEFT-TO-RIGHT : 5 3 7 2 4 6 8 1 9
;; TOP-TO-BOTTOM RIGHT-TO-LEFT : 5 7 3 8 6 4 2 9 1
;; BOTTOM-TO-TOP LEFT-TO-RIGHT : 1 9 2 4 6 8 3 7 5
;; BOTTOM-TO-TOP RIGHT-TO-LEFT : 9 1 8 6 4 2 7 3 5

;; level order function recursive :
;; TOP-TO-BOTTOM LEFT-TO-RIGHT : 5 3 7 2 4 6 8 1 9
;; TOP-TO-BOTTOM RIGHT-TO-LEFT : 5 7 3 8 6 4 2 9 1
;; BOTTOM-TO-TOP LEFT-TO-RIGHT : 1 9 2 4 6 8 3 7 5
;; BOTTOM-TO-TOP RIGHT-TO-LEFT : 9 1 8 6 4 2 7 3 5

So we could add more functions with very little effort to perform other BFS traversal variations.
Let’s implement a function that only shows nodes with an odd numeric value, and another that shows only the leaf nodes in the tree.


  (defun traverse-tree-by-level-only-odd
      (node fun &optional horder vorder)
    (traverse-tree-by-level
     node fun horder vorder
     :test #'(lambda (node) (and node (oddp (btnode-val node))))))

  (defun traverse-tree-by-level-only-leaves
      (node fun &optional horder vorder)
    (traverse-tree-by-level
     node fun horder vorder
     :test #'(lambda (node)
               (and node
                    (not (or (btnode-left node)
                             (btnode-right node)))))))

The results :

(run-bfs-test-variations
 '(("level order function (odd only) : ~%"    traverse-tree-by-level-only-odd)
   ("level order function (leaves only) : ~%" traverse-tree-by-level-only-leaves)))

;; level order function (odd only) :

;; level order function (leaves only) :

We’ve gained significantly with this approach for only a few lines of code.
How long would a version in other languages be with the same functionality and flexibility ?

Tree traversal code.

Ruby ‘compiler’ in CL

The implementers of existing Ruby VMs have gone the way of C, C++ and Java. There is another possibility. Why not implement Ruby in Common Lisp ?

Ok, let’s take a first shot and implement a subset of the Ruby language : just enough to run some simple contrived numeric benchmarks. The following pseudo-BNF describing a minimal subset of Ruby should do (taken from the rubydocs here).



STMT            : CALL
                | EXPR

EXPR            : ARG

CALL            : FUNCTION

ARG             : LHS `=' ARG
                | ARG `..' ARG
                | ARG `+' ARG
                | ARG `-' ARG
                | ARG `*' ARG

LHS             : VARIABLE

ARGS            : ARG (`,' ARG)*

ARGDECL         : `(' ARGLIST `)'
                | nil

LITERAL         : numeric

TERM            : `;'
                | `\n'

STRING          : `"' any_char* `"'

IDENTIFIER is the sequence of characters in the pattern of /[a-zA-Z_][a-zA-Z0-9_]*/.

Let’s write two functions in our new Ruby subset language. First, a function to calculate Fibonacci numbers.

def fib(n)
  curr = 0
  succ = 1
  presucc = nil

  for i in 1..n do
    presucc = succ
    succ = curr + succ
    curr = presucc
  end
  curr
end



Then run it on ruby1.9 :

jgrant@pluto:~/cl-ruby$ ruby1.9 fib_it.rb
195328212870775773163201494759...<208928 digits>...719893411568996526838242546875
50.352559375 seconds of real time
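As a quick sanity check, here is the fib loop with its missing `end`s restored, exercised on small illustrative inputs (these calls are not from the original benchmark):

```ruby
# The iterative fib loop from the listing above, completed with its
# closing `end`s and returning the accumulated value.
def fib(n)
  curr = 0
  succ = 1
  presucc = nil

  for i in 1..n do
    presucc = succ
    succ = curr + succ
    curr = presucc
  end
  curr
end

# First ten values computed by the loop:
puts (1..10).map { |n| fib(n) }.join(" ")  # 1 1 2 3 5 8 13 21 34 55
```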

Next, a function to calculate factorials :

def fact (n)
  tot = 1

  for i in 1..n do
    tot = tot * i
  end
  tot
end

res = fact(100000)
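Again, a quick sanity check with the listing’s missing `end`s restored (the small inputs here are illustrative; 100! having 158 digits is a handy known value):

```ruby
# The factorial loop from the listing above, completed with its
# closing `end`s and returning the accumulated product.
def fact(n)
  tot = 1

  for i in 1..n do
    tot = tot * i
  end
  tot
end

puts fact(5)                 # 120
puts fact(100).to_s.length   # 158 -- 100! has 158 digits
```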

Then run it on ruby1.9 :

jgrant@pluto:~/cl-ruby$ ruby1.9 fact.rb
282422940796034787429342157802...<456514 digits>...000000000000000000000000000000
20.71802105 seconds of real time

Now let’s compare by running the same code on our Ruby subset implemented in Lisp :

(defun test-fib ()
  (let* ((fun (compile-ruby
               "def fib (n)
                  curr = 0
                  succ = 1
                  presucc = nil

                  for i in 1..n do
                    presucc = succ
                    succ = curr + succ
                    curr = presucc
                  end
                  curr
                end"))
         (funcall (compile-ruby "fib(1000000)"))
         (res (progn (run-ruby fun)
                     (format t "~A~%~A" fun funcall)
                     (time (run-ruby funcall))))
         (resstr (format nil "~A" res)))
    (format t "~A...<~A digits>...~A"
            (subseq resstr 0 30)
            (- (length resstr) 60)
            (subseq resstr (- (length resstr) 30)))))

* (test-fib)
Evaluation took:
  31.734 seconds of real time
  31.810000 seconds of total run time (30.070000 user, 1.740000 system)
  [ Run times consist of 1.260 seconds GC time, and 30.550 seconds non-GC time. ]
  100.24% CPU
  3 forms interpreted
  50,639,978,358 processor cycles
  43,412,830,352 bytes consed

195328212870775773163201494759...<208928 digits>...719893411568996526838242546875

And factorial

(defun test-fact ()
  (let* ((fun (compile-ruby
               "def fact (n)
                  tot = 1

                  for i in 1..n do
                    tot = tot * i
                  end
                  tot
                end"))
         (funcall (compile-ruby "fact(100000)"))
         (res (progn (run-ruby fun)
                     (format t "~A~%~A" fun funcall)
                     (time (run-ruby funcall))))
         (resstr (format nil "~A" res)))
    (format t "~A...<~A digits>...~A"
            (subseq resstr 0 30)
            (- (length resstr) 60)
            (subseq resstr (- (length resstr) 30)))))

* (test-fact)
Evaluation took:
  5.772 seconds of real time
  5.790000 seconds of total run time (5.470000 user, 0.320000 system)
  [ Run times consist of 0.230 seconds GC time, and 5.560 seconds non-GC time. ]
  100.31% CPU
  3 forms interpreted
  9,210,473,724 processor cycles
  9,030,853,008 bytes consed

282422940796034787429342157802...<456514 digits>...000000000000000000000000000000

We’re seeing a ~1.5–3.5× improvement with our toy CL Ruby ‘compiler’ without even trying to generate performant code.
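The speedup figures follow directly from the timings quoted above :

```ruby
# Speedup = Ruby 1.9 wall time / CL-hosted run time, per benchmark.
fib_speedup  = 50.352559375 / 31.734  # fib(1000000)
fact_speedup = 20.71802105  / 5.772   # fact(100000)
puts format("fib: %.2fx  fact: %.2fx", fib_speedup, fact_speedup)
# fib: 1.59x  fact: 3.59x
```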

Code is here

Google to acquire ITA ?

Update 2010-06-30 : So just over a day after I posted this entry Google announced that they have acquired ITA. Announcement

There was buzz back in April about Google possibly acquiring ITA Software.

A few days ago Dan posted that these were just rumors based on a single Bloomberg article.

Well it seems that it may be more than just a rumor : The Wall Street Journal now has an article regarding the potential acquisition.
The silence seems quite confirming : “Google declined to comment. ITA didn’t respond to calls seeking comment.”.

Also fairly interesting is that ITA uses a very similar combination of languages to what Google does : Python, C++, Java. Would this factor into Google’s interest ?

Hopefully, if this does happen, the ITA Lisp hackers get a substantial financial reward for all their innovative work. Fingers crossed …

Summer 2010 reading

“Let over Lambda – 50 Years of Lisp” by Doug Hoyte

This one had been sitting on my bookshelf for almost a year.

“Let Over Lambda is one of the most hardcore computer programming books out there. Starting with the fundamentals, it describes the most advanced features of the most advanced language: COMMON LISP. The point of this book is to expose you to ideas that you might otherwise never be exposed to.”


“Macros are what make lisp the greatest programming language in the world.”


“If you are looking for a dry coding manual that re-hashes common-sense techniques in whatever langue du jour, this book is not for you. This book is about pushing the boundaries of what we know about programming.”

“Only the top percentile of programmers use Lisp and if you can understand this book you are in the top percentile of Lisp programmers.”

I’m sure that the author has received his fair share of derision for making statements like these and for his clear and well-researched content showing how Lisp is the greatest programming language ever invented. It may come off as conceited to some, but he is right. He’s also in good company with the greatest computer scientists ever known, who have also made similarly bold statements regarding Lisp. This style is lacking in most technical writing today; there has been too much coddling and dilution in programming texts over the last decade. The fact is that unless you have mastered this language you are in no position to even begin discussing this topic. This book, much like any of the posts on this site, is not for convincing the majority of Blub programmers that they should be using Lisp but to encourage a few that have an appreciation for the world’s greatest programming language to look even deeper.

Here’s a quote from the first chapter for those that might have known for a long while that there is more to programming than the hype of formal object systems that we’ve all been subjected to for the past few decades :

“Object systems are a formalisation of a subset of let and lambda combinations, sometimes with gimmicks like inheritance bolted on. Because of this, Lisp programmers often don’t think in terms of classes and objects. Let and lambda are fundamental; objects and classes are derivatives. As Steele says, the ‘object’ need not be a primitive notion in programming languages. Once assignable value cells and good old lambda expressions are available, object systems are, at best, occasionally useful abstractions and, at worst, special-case and redundant.”

Buy the book. Programming language choice matters.

“Introduction to Metamathematics” by Stephen Cole Kleene

Not much more needs to be said about this one.

“The Stuff of Thought: Language as a Window into Human Nature” by Steven Pinker

From the Washington Post :

“Language comes so naturally to us that it’s easy to believe there’s some sort of intrinsic logic connecting the thing and its name, the signifier and the signified. In one of Plato’s dialogues, a character named Cratylus argues that “a power more than human gave things their first names.”

But Cratylus was wrong. Human language is an emanation of the human mind. A thing doesn’t care what we call it. Words and their rules don’t tell us about the world; they tell us about ourselves.

That’s the simple premise behind Steven Pinker’s latest work of popular science. According to the Harvard psychologist, people are “verbivores, a species that lives on words.” If you want to understand how the brain works, how it thinks about space and causation and time, how it processes emotions and engages in social interactions, then you need to plunge “down the rabbit hole” of language. The quirks of our sentences are merely a portal to the mind.

In The Stuff of Thought, Pinker pitches himself as the broker of a scientific compromise between “linguistic determinism” and “extreme nativism.” The linguistic determinists argue that language is a prison for thought. The words we know define our knowledge of the world. Because Eskimos have more nouns for snow, they are able to perceive distinctions in snow that English speakers cannot. While Pinker deftly discredits extreme versions of this hypothesis, he admits that “boring versions” of linguistic determinism are probably accurate. It shouldn’t be too surprising that our choice of words can frame events, or that our vocabulary reflects the kinds of things we encounter in our daily life. (Why do Eskimos have so many words for snow? Because they are always surrounded by snow.) The language we learn as children might not determine our thoughts, but it certainly influences them.

Extreme nativism, on the other hand, argues that all of our mental concepts — the 50,000 or so words in the typical vocabulary — are innate. We are born knowing about carburetors and doorknobs and iPods. This bizarre theory, most closely identified with the philosopher Jerry Fodor, begins with the assumption that the meaning of words cannot be dissected into more basic parts. A doorknob is a doorknob is a doorknob. It only takes Pinker a few pages to prove the obvious, which is that each word is not an indivisible unit. The mind isn’t a blank slate, but it isn’t an overstuffed filing cabinet either.

So what is Pinker’s solution? He advocates the middle ground of “conceptual semantics,” in which the meaning of our words depends on an underlying framework of basic cognitive concepts. (As Pinker admits, he owes a big debt to Kant.) The tenses of verbs, for example, are shaped by our innate sense of time. Nouns are constrained by our intuitive notions about matter, so that we naturally parcel things into two different categories, objects and substances (pebbles versus applesauce, for example, or, as Pinker puts it, “hunks and goo”). Each material category comes with a slightly different set of grammatical rules. By looking at language from the perspective of our thoughts, Pinker demonstrates that many seemingly arbitrary aspects of speech, like that hunk and goo distinction, aren’t arbitrary at all: They are byproducts of our evolved mental machinery.

Pinker tries hard to make this tour of linguistic theory as readable as possible. He uses the f-word to explore the topic of transitive and intransitive verbs. He clarifies indirect speech by examining a scene from “Tootsie,” and Lenny Bruce makes so many appearances that he should be granted a posthumous linguistic degree. But profanity from Lenny Bruce can’t always compensate for the cryptic vocabulary and long list of competing ‘isms. Sometimes, the payoff can be disappointing. After a long chapter on curse words — this book deserves an “explicit content” warning — Pinker ends with the banal conclusion that swearing is “connected with negative emotion.” I don’t need conceptual semantics to tell me that.

The Stuff of Thought concludes with an optimistic gloss on the power of language to lead us out of the Platonic cave, so that we can “transcend our cognitive and emotional limitations.” It’s a nice try at a happy ending, but I don’t buy it. The Stuff of Thought, after all, is really about the limits of language, the way our prose and poetry are bound by innate constraints we can’t even comprehend. Flaubert was right: “Language is a cracked kettle on which we beat out tunes for bears to dance to, while all the time we long to move the stars to pity.”