Here’s a molecule viewer using OpenGL in Clojure. I implemented it so that I could compare this type of development with Common Lisp (and also with OpenGL development in C). OpenGL with Clojure can be really fun using Zach Tellman’s Penumbra. Coming at OpenGL from a more functional angle (e.g. not mutating global vars) can feel a little different, but it isn’t difficult for anyone who has done enough functional programming. The code is here.

* Update 2010-03-31 * Fixed bug where light was moving with molecule and added some performance tweaks (thanks Zach).

DISCLAIMER:
THIS IS A LANGUAGE RANT. DO NOT READ IF YOUR BPF* IS LESS THAN 0.37.
SIDE EFFECTS MAY INCLUDE SEVERE CONFUSION, UNJUSTIFIED ANGER, GASSY DISCHARGE,
INCOMPREHENSIBLE DIALOG, ERRATIC DEGRADATION OF CHARACTER AND POSSIBLY A
COMPLETE EMOTIONAL MELTDOWN.

Performance is one important quality people look for in a language. One language that continually surprises me in this regard is Haskell. Seriously, it’s simply amazing. But performance is only one consideration when choosing a language. Abstraction is another.

What about more powerful levels of abstraction, such as meta-programming?

Haskell has very powerful functional idioms, but it (like many other languages) is left at the door beyond which even more unfathomable worlds of expression are possible by way of meta-programming. Unfortunately, as cool as Template Haskell is, it does not gain Haskell entry into the warp-speed meta-programming universe. In Lisp there is no distinction between the syntax and the AST. In Template Haskell the AST is modeled using explicit data types.
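To make that distinction concrete, here is a minimal sketch in Clojure (the macro name `unless*` and the sample form are mine, purely illustrative): the ‘AST’ a macro receives is just an ordinary list, the same data structure the reader produces, so it can be taken apart and rebuilt with plain list functions.

```clojure
;; A macro receives its arguments as unevaluated data (lists and symbols)
;; and returns a new list, which the compiler then treats as code.
(defmacro unless* [test & body]
  `(if (not ~test) (do ~@body)))

;; Code really is data: a form is a list we can inspect with first/rest.
(def form '(+ 1 2))
(first form)                   ;; => +
(eval (cons '* (rest form)))   ;; => 2, after swapping + for *

(unless* (> 1 2) :small)       ;; => :small
```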

Beyond the door is another universe where Lisp macros make warp-speed exploration possible. Lisp users travel this world freely and playfully, looking for the next interesting practical joke to play on users of other languages (which, of course, means writing more Lisp code!). Beware: travelling the universe at warp speed is quite dangerous, but probably no more dangerous than driving your car. Can any other language pass through this door without embracing symbolic expressions (which are what make Lisp macros possible)? Why not leave right now and find out?

Seriously, I’ll be patiently waiting right here until you return. I’m not going anywhere.

The endless neophytic whining about Lisp not having a killer application is annoying. It’s annoying because we all know this is a very old joke. You do know it’s a very old joke, don’t you? There are many killer applications; they have existed for the past 50 years. One reason they are largely unknown might be that they aren’t social networking sites or any of the other dehumanizing, reductive Web 2.0 ilk so commonly marketed as ‘innovative’. In other words: certain persons might simply have tunnel vision, or be plainly ignorant regarding Lisp’s so-called killer applications. Another reason is that the military, aerospace and other companies use Lisp, but you and I will never get to see these applications (let alone the source code!). With Clojure becoming more popular this may all change in the next few years: the Web 2.0 places may start using the language. But really, any application written in Lisp is a killer application!

Something else to note carefully is how often Lisp’s most vehement opponents know only the smallest amount of Lisp syntax and, sadly, nothing more. Or they have just enough superficial knowledge to argue with those who have been using the language for decades. The ditto-heads scream about there being too many parentheses. Funny, because Java code often has more parentheses than Lisp; they are just in different places!

It’s my own fault: I sometimes try engaging certain superficial functional and Lisp crowds on the internet. Comments are quickly down-voted because of the prevalent mob mentality and incapacity for independent thought. Have we all not yet evolved beyond peer pressure? Really? Sometimes there are gems, when an independent thinker has something to add to the conversation instead of the mob screech and splintered egos that ruin the dialog. I basically end up writing my own posts on my own site to summarize and raise the signal-to-noise ratio.

Not many comp-sci people would dispute the value of studying boolean algebra, graph theory, algorithms or any of the other fundamental topics related to their field. Why, then, the aversion to Lisp? Maybe it’s because some people (including computer scientists) have an aversion to math? I believe it’s because they have not taken the time to learn it. It’s not that Lisp is really difficult to learn. In fact, newbie programmers find the syntax quite easy. What’s difficult is un-learning the imperative approach they may have been used to for years. Computers are often treated as binary soup buckets! Yes, binary soup buckets! Stir, taste and then repeat the steps until the desired application flavor is acquired for whatever software broth you might be cooking up. Scratch at the same location in memory until the desired outcome is obtained. This is the stuff of real nerd stand-up comedy, folks. I could get into the Lisp machine rant, but I’ll leave that for another time.
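That ‘scratching at memory’ habit is exactly what functional languages drop. A few lines of Clojure sketch the contrast (toy code, not from any of these posts): updates return new values, and the old ones stay untouched.

```clojure
;; No stirring the soup in place: assoc returns a NEW vector.
(def v [1 2 3])
(def v2 (assoc v 0 99))

v   ;; => [1 2 3]  the original is untouched
v2  ;; => [99 2 3]
```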

Even if a single useful Lisp application had never been written, it would make no difference at all. Lisp stands on its own merits, in just the same way that pure mathematics does.

In fact, Lisp basically IS math: lambda calculus, set theory and so on, realized as an implementation of those ideas for a computer. This is one reason certain mathematicians love Lisp. It’s interesting to note that the first computers existed only in math papers. For reasons why programming languages should be ‘mathy’, and why they are important to theorem proving, read Chaitin. Seriously, go read anything by Chaitin right now.
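The lambda-calculus claim can even be made literal. Here is a sketch of Church numerals in Clojure (the names are mine, for illustration only): numbers represented as nothing but function application, runnable directly as Lisp.

```clojure
;; A Church numeral n means "apply f n times to x".
(def zero (fn [f] (fn [x] x)))
(defn succ [n] (fn [f] (fn [x] (f ((n f) x)))))

;; Convert back to an ordinary integer by counting applications of inc.
(defn church->int [n] ((n inc) 0))

(church->int zero)               ;; => 0
(church->int (succ (succ zero))) ;; => 2
```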

If Lisp were a character from Star Trek it would be Q: despised and misunderstood by the many and loved by the few, who are usually savvy starship captains and smoking-hot babes of various intergalactic species. Women always seem to ‘get it’ long before men do. Why is that? Something to do with their often higher BPF?

Thanks for the applause, I’ll be here all week!

*BPF – Brain Plasticity Factor. Where IQ measures ‘intelligence’, BPF is a measure of a person’s ability to consider more than a single perspective, i.e. a flexible, vector-based intelligence as opposed to a more rigid, scalar-based intelligence. Low BPFs usually indicate a tendency toward mechanical thinking and often a predisposition to fundamental religious outbursts (sometimes fatal); in other words, they just aren’t fun to be around.

High BPFs, on the other hand, display far greater degrees of intelligence, actual wisdom (not only knowledge), an ability to think independently, increased happiness (not to be so quickly devalued – ask the Ruby guys), an incredibly high tolerance for those with low BPFs, a certain way with the opposite (or same) sex (remember Einstein with the ladies!) and an all-round appreciation for the uniqueness of the human being. Warning: these people can be really fun at parties!

* Update 2009-12-27 * Using lists instead of vectors brings the Clojure version down from around 20x to 10x slower than the CL version. One reader correctly stated that when doing performance comparisons between languages and implementations you want the code and data structures to be as identical as possible.

“A palindrome is a word, phrase, number or other sequence of units that can be read the same way in either direction (the adjustment of punctuation and spaces between words is generally permitted).”

Here we look at an implementation of a function (in Clojure) that finds the longest palindrome in some text. Let’s also look at the worst-case time and space performance.
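As a baseline, a naive version might look like the sketch below (my own illustrative code, not the implementation discussed here): it tests every substring, longest first, which is O(n^3) time in the worst case, compared with the O(n) algorithm this post is about.

```clojure
;; Naive longest-palindrome: try every substring, longest first.
;; Worst case O(n^3): O(n^2) substrings, each checked in O(n).
(defn palindrome? [s]
  (= (seq s) (reverse s)))

(defn longest-pal-naive [text]
  (let [n (count text)]
    (first
     (for [len   (range n 0 -1)
           start (range 0 (inc (- n len)))
           :let  [sub (subs text start (+ start len))]
           :when (palindrome? sub)]
       sub))))

(longest-pal-naive "xamanaplanacanalpanamay")
;; => "amanaplanacanalpanama"
```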

It’s obviously much quicker than the naive version, and some Clojure experts might even be able to come up with a few tricks for tweaking the code (possibly using lists and consing instead of vectors?).

user=> (#'i27.palindromes/big-test)
1000 X ' amanaplanacanalpanama '
fast : "Elapsed time: 524.958 msecs"

A Common Lisp implementation looks very similar …

(defun longest-pals-fast (text)
  "O(n) time & space complexity"
  (declare (optimize (speed 3)
                     (compilation-speed 0)
                     (space 0) (debug 0)))
  (labels ((final-centers (n tcenters centers)
             (cond
               ((<= n 0) centers)
               (t
                (let ((n (- n 1)))
                  (final-centers
                   n
                   (rest tcenters)
                   (cons (min n (first tcenters))
                         centers))))))
           (ext-centers (text n centers tcenters cdist)
             (cond
               ((= 0 cdist)
                (ext-tail text (+ 1 n) 1 centers))
               ((= (- cdist 1) (first tcenters))
                (ext-tail text n (first tcenters) centers))
               (t
                (ext-centers text n
                             (cons (min (- cdist 1) (first tcenters))
                                   centers)
                             (rest tcenters)
                             (- cdist 1)))))
           (ext-tail (text n curr-tail centers)
             (cond
               ((> n (- (length text) 1))
                (final-centers curr-tail centers
                               (cons curr-tail centers)))
               ((= (- n curr-tail) 0)
                (ext-centers text n
                             (cons curr-tail centers)
                             centers curr-tail))
               ((eql (elt text n) (elt text (- n curr-tail 1)))
                (ext-tail text (+ 1 n) (+ 2 curr-tail) centers))
               (t
                (ext-centers text n
                             (cons curr-tail centers)
                             centers curr-tail))))
           (pal-around-centers (text)
             (ext-tail text 0 0 '())))
    (pal-around-centers text)))

… except that the CL version is about 20x quicker:

triton:palindromes jgrant$ sbcl --noinform --load palindromes.lisp
--eval "(progn (big-test) (quit))"
1000 X ' amanaplanacanalpanama '
fast :
Evaluation took:
0.024 seconds of real time
0.023747 seconds of total run time (0.022532 user, 0.001215 system)
100.00% CPU
38,248,776 processor cycles
460,656 bytes consed

All code can be found here: Clojure, Common Lisp and Go
(an imperative implementation for comparison, but beware, it’s buggy!).

A quine is a program that produces its complete source code as its only output.
Because Clojure is a Lisp, it is especially well suited to writing quines. This is a quine implemented in Clojure.

((fn [x] (list x (list (quote quote) x))) (quote (fn [x] (list x (list (quote quote) x)))))
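One quick way to convince yourself that this really is a quine (a check I added, using nothing beyond quote, eval and =): read the expression in as data and compare it with the result of evaluating it.

```clojure
;; If evaluating the form yields the form itself, it is a quine.
(def quine
  '((fn [x] (list x (list (quote quote) x)))
    (quote (fn [x] (list x (list (quote quote) x))))))

(= quine (eval quine)) ;; => true
```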

A few updates and bug fixes from the first version.
You can now play it here.
ACHTUNG!: This requires unrestricted permissions, so you should probably just grab the code and compile it yourself.

Took a shot at implementing a PI digit generator in Clojure using a ‘fast’ algorithm.
It seemed like a decent enough exercise to try to understand something about performance in Clojure.

MacBook Pro – Intel Core 2 Duo 2.26 GHz – 4GB RAM
Java(TM) SE Runtime Environment (build 1.6.0_15-b03-219)
Java HotSpot(TM) 64-Bit Server VM (build 14.1-b02-90, mixed mode)
Clojure 1.1.0-alpha-SNAPSHOT (Aug 20 2009) git commit f1f5ad40984d46bdc314090552b76471ee2b8d01

Clojure matches the performance of Java in this example.

The Clojure code:

(import 'java.lang.Math)
(import 'java.math.MathContext)
(import 'java.math.BigDecimal)

(defn sb-pi
  "Calculates PI digits using the Salamin-Brent algorithm
   and Java's BigDecimal class."
  [places]
  (let [digits (.intValue (+ 10 places)) ;; add some guard digits
        round-mode BigDecimal/ROUND_DOWN]
    (letfn [(big-sqrt [#^BigDecimal num]
              ;; Calculates square root using Newton's method.
              (letfn [(big-sqrt-int
                        [#^BigDecimal num #^BigDecimal x0 #^BigDecimal x1]
                        ;; aux function for calculating square root
                        (let [#^BigDecimal x0new x1
                              #^BigDecimal x1new (-> num (.divide x0new digits round-mode))
                              #^BigDecimal xsum  (+ x1new x0new)
                              #^BigDecimal x1tot (-> xsum (.divide 2M digits round-mode))]
                          (if (= x0 x1)
                            x1tot
                            (recur num x1 x1tot))))]
                (big-sqrt-int
                 num 0M (BigDecimal/valueOf
                         (Math/sqrt (.doubleValue num))))))
            (sb-pi-int [#^BigDecimal a #^BigDecimal b
                        #^BigDecimal t #^BigDecimal x #^BigDecimal y]
              ;; aux function for calculating PI
              (let [#^BigDecimal y1    a
                    #^BigDecimal absum (+ a b)
                    #^BigDecimal a1    (-> absum (.divide 2M digits round-mode))
                    #^BigDecimal b1    (big-sqrt (* b y1))
                    #^BigDecimal ydiff (- y1 a1)
                    #^BigDecimal t1    (- t (* x ydiff ydiff))
                    #^BigDecimal x1    (* x 2M)]
                (if (== a b)
                  (let [#^BigDecimal absum1 (+ a1 b1)
                        #^BigDecimal absqrd (* absum1 absum1)
                        #^BigDecimal u      (* t1 4M)]
                    (-> absqrd
                        (.divide u digits round-mode)
                        (.setScale places round-mode)))
                  (recur a1 b1 t1 x1 y1))))]
      (sb-pi-int 1M (-> 1M (.divide #^BigDecimal (big-sqrt 2M) digits round-mode))
                 (/ 1M 4M) 1M nil))))

(time (println (sb-pi (Integer/parseInt (second *command-line-args*)))))
$ time clj pi.clj 1 --> 3.403 msecs
$ time clj pi.clj 10 --> 3.956 msecs
$ time clj pi.clj 100 --> 10.630 msecs
$ time clj pi.clj 1000 --> 141.937 msecs
$ time clj pi.clj 10000 --> 3316.180 msecs

The same algorithm in Java (but using iteration instead of recursion):

import java.math.BigDecimal;
import static java.math.BigDecimal.*;
class Pi {
    private static final BigDecimal TWO  = new BigDecimal(2);
    private static final BigDecimal FOUR = new BigDecimal(4);
    private static final int ROUND_MODE = ROUND_DOWN;

    public static void main(String[] args) {
        long start = System.nanoTime();
        System.out.println(pi(Integer.parseInt(args[0])));
        System.out.println("Elapsed time: " +
            ((System.nanoTime() - start) / 1E6) + " msecs");
    }

    // Salamin-Brent Algorithm
    public static BigDecimal pi(final int digits) {
        final int SCALE = 10 + digits;
        BigDecimal a = ONE;
        BigDecimal b = ONE.divide(sqrt(TWO, SCALE), SCALE, ROUND_MODE);
        BigDecimal t = new BigDecimal(0.25);
        BigDecimal x = ONE;
        BigDecimal y;
        while (!a.equals(b)) {
            y = a;
            a = a.add(b).divide(TWO, SCALE, ROUND_MODE);
            b = sqrt(b.multiply(y), SCALE);
            t = t.subtract(x.multiply(y.subtract(a).multiply(y.subtract(a))));
            x = x.multiply(TWO);
        }
        return a.add(b)
                .multiply(a.add(b))
                .divide(t.multiply(FOUR), SCALE, ROUND_MODE)
                .setScale(digits, ROUND_MODE);
    }

    // square root method (Newton's)
    public static BigDecimal sqrt(BigDecimal A, final int SCALE) {
        BigDecimal x0 = new BigDecimal("0");
        BigDecimal x1 = new BigDecimal(Math.sqrt(A.doubleValue()));
        while (!x0.equals(x1)) {
            x0 = x1;
            x1 = A.divide(x0, SCALE, ROUND_MODE);
            x1 = x1.add(x0);
            x1 = x1.divide(TWO, SCALE, ROUND_MODE);
        }
        return x1;
    }
}
$ time java Pi 1 ----> 2.162 msecs
$ time java Pi 10 ----> 2.425 msecs
$ time java Pi 100 ----> 7.897 msecs
$ time java Pi 1000 ----> 150.610 msecs
$ time java Pi 10000 ----> 3009.705 msecs