myths and legends
Its ‘bizarre’ syntax, its alleged ability to stand up to low-level languages in performance, the unique and powerful abstractions awarded to the programmer who even dares to attempt its almost vertical cliff face of a learning curve. Or so it seemed back then. With Lisp’s more recent popularity there has been much hype surrounding its capabilities, especially in contrast to other languages.
Lisp being an ‘implementation’ of the lambda calculus for computers, and a functional language, means that many powerful abstractions are accessible to the programmer, from fundamental symbolic expressions to macros. Such abstractions simply don’t exist in most other languages, particularly the imperative kind. This ground has already been covered well on the web by some great writers and experienced Lisp hackers.
This essay will not focus on Lisp’s powerful abstractions but on another practical concern of all programmers: performance, specifically performance compared with C.
I’m not sure what the ratio of texts on Lisp’s abstraction power to texts on Lisp’s performance is, but it’s clearly not 1:1.
Some of the reasons have to do with the fact that over the last five decades the Lisp community has been more concerned with the former (and rightly so) than the latter. “CPU time is far cheaper than programmer time”, or something like that.
A few decades ago hardware was custom designed for Lisp to run on. Nowadays most hardware is designed with C in mind.
So the author makes no claim that performance has been neglected by the Lisp community in compiler design and implementation, but there has been more hype around the performance capabilities of Lisp than around anything else related to the language.
There have been a few texts and papers claiming that Lisp can match C in performance. My personal attempts to verify these claims suggest that they are either unsubstantiated (no code is provided to demonstrate them), or that code was provided but was so trivial (sometimes contrived) that it exercised only a small subset of Lisp’s datatypes. Benchmark code often suffers from this.
I think slightly more complex code should be used to validate (or invalidate) such claims. The code should be more than a couple of hundred lines long but less than a thousand. It should vary a little more in its use of data types (strings, numbers etc.) and data structures (hashes, arrays, trees). It should solve some problem or play some game to exercise the CPU, memory usage and IO. A reference implementation using a chosen algorithm should be written first in C. This gives us a performance goal for comparison with a follow-up Lisp implementation using the same algorithm, solving the same problem.
I coded implementations in C and Lisp to get some answers. The details will not be explained here, but the code is available at the end of the article. Both implementations use the same algorithm and the corresponding data structures as far as possible.
One other rule: the implementations are intentionally not threaded, to ensure that they use only one CPU core on the test hardware. The goal is not to measure multi-core performance, or to pit algorithmic and data-structure optimizations against each other, but to compare the performance (in time and space) of the code that the respective compilers generate, using algorithms and data structures that are as similar as possible.
numbers & pictures
All the Boggle solver implementations were compiled with 64-bit C (GCC 4.3.3) and Lisp (SBCL 1.0.18 & ccl64 1.3-r11954M) compilers. All tests were run on 64-bit Linux (Ubuntu 9.04 amd64).
The test hardware is a 64-bit Core 2 Duo (T7500 @ 2.20GHz) laptop (bus speed 800MHz) with 2GB RAM (DDR2 @ 667MHz).
A few different board sizes were measured, from the standard Boggle board size of 4×4 up to 1000×1000.
The fastest of 5 runs of each implementation was used.
Here are the results for the running time (in seconds) of the solver implementations in table form:

dimensions    C        SBCL      CCL
4×4           0.0005   0.068     0.012
20×20         0.010    0.152     0.080
100×100       0.389    3.448     1.964
1000×1000     39.259   177.675   182.522
and as a line graph (please note the logarithmic scale):
and the memory usage of the solver implementations as a bar graph:
thoughts about the results
The memory usage between Lisp and C is fairly comparable (at least for CCL vs. C). Nothing really exciting here.
For speed, as many would expect: on smaller data sets the Lisp versions pay a higher price in start-up time. This is not really an issue for real-world applications. The cost of garbage collection in the 1000×1000 scenario was only 7.720 seconds for SBCL and 4.899 seconds for CCL. As a fraction of the total respective runtimes this is fairly low, so something else must account for the Lisp versions’ significantly higher runtimes. The Lisp code has been carefully written to avoid boxing/unboxing of types and to let the compilers know ahead of time what types are used in the critical sections of the code. These standard typing practices give large improvements over the same code without them; in many cases about a 50–100 times improvement over Lisp code without type declarations.
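To illustrate that typing style, here is a minimal sketch, not taken from the solver itself; the function and its types are hypothetical, but the declarations are the standard Common Lisp forms that SBCL and CCL use to generate specialized code:

```lisp
;; Hypothetical example of the typing style described above.
;; With these declarations the compiler can use raw machine-word
;; arithmetic instead of generic (boxed) arithmetic.
(defun sum-scores (scores)
  (declare (optimize (speed 3) (safety 0))
           (type (simple-array fixnum (*)) scores))
  (let ((total 0))
    (declare (type fixnum total))
    (dotimes (i (length scores) total)
      (setf total (the fixnum (+ total (aref scores i)))))))
```

A useful side effect: when compiling at `(speed 3)`, SBCL emits efficiency notes for operations it could not open-code, which is how leftover boxing is usually hunted down.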
Now for the best case of Lisp vs. C performance: the Clozure version performs better than SBCL over most of the range of input sizes, at roughly 5 times slower than C.
The SBCL version varies from about 5–15 times slower than C, depending on the input size. SBCL seems to be slightly faster than CCL for the 1000×1000 board size but uses about 250MB more memory.
These results apparently line up with those from the “Computer Language Benchmarks Game”.
lisp performance in the real world ?
Performance is only one among many considerations when writing good software. In many real-world applications raw CPU performance is not the bottleneck; memory and IO are. Web applications are a common example of this. However, for the applications where raw CPU performance is key, Lisp is still a viable competitor in many scenarios. Its performance here is roughly on par with VM implementations of other languages such as Java or C#. In trivial cases (possibly purely numeric) Lisp may be able to match or even surpass C. Unfortunately, trivial cases are hardly the norm.
The hype claiming that Lisp outperforms C, as if this were the general case, is annoying and tarnishes the Lisp community as a whole. That may be completely unfair, but it’s the way outsiders and newcomers will view things. What’s worse is that when Haskell hackers make their claims about performance, they are actually accurate most of the time. There are some advantages to arriving relatively late to the party, so to speak. Pity about the syntax in some areas, but Haskell is still an incredible and practical functional descendant of Lisp.
The gains in abstraction and expression in Lisp go far beyond a language like C, and the loss in raw CPU performance is probably worth it for most applications. For the cases where it is not acceptable, rewriting the critical sections in C is fairly straightforward and would bring the hybrid Lisp/C implementation very close to a pure C version in performance. This may be the topic of a future post here, so look out.
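Such a hybrid would typically go through the implementation’s foreign function interface, or the portable CFFI library. A minimal sketch, assuming a hot-path C function `solve_board` compiled into `libboggle.so` (both names are illustrative, not from the article’s code):

```lisp
;; Hypothetical hybrid: the hot path lives in C, the rest stays in Lisp.
;; Assumes a C function
;;   int solve_board(const char *board, int width, int height);
;; has been compiled into libboggle.so. All names here are illustrative.
(cffi:load-foreign-library "libboggle.so")

(cffi:defcfun ("solve_board" solve-board) :int
  (board  :string)   ; CFFI converts the Lisp string to a C string
  (width  :int)
  (height :int))

;; Callable from Lisp like any other function, e.g.
;; (solve-board "tape..." 4 4)
```

The FFI call itself has a small fixed overhead, so this pays off when each call into C does a meaningful chunk of work, as a whole-board solve would.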
To sum it all up: the small constant factor between C and Lisp really does not matter. As the Lisp compilers improve the code they generate over the coming years, the gap will close without much work on the Lisp hacker’s part.
Also of note: the tested Lisp compilers were both open source. If anyone would like to test this code with commercial Lisp compilers, please do; I’d love to see the results. I have a feeling that the gap in performance between C and Lisp will be even smaller with commercial Lisp compilers.
the code:
“ANSI Common Lisp” – Paul Graham – CH. 13 “Speed”.