Overtone is an open source audio environment being created to explore musical ideas from synthesis and sampling to instrument building, live-coding and collaborative jamming.

In this video Sam Aaron gives a fast-paced introduction to a number of key live programming techniques such as triggering instruments, scheduling future events and synth design. Finally, the viewer is shown how a simple musical sequence may be composed and then converted into an intricate Reich phase. The main body of the video was recorded in one take and features an Emacs buffer (using the Live Coding Config, is.gd/live_coding_emacs) for editing text and communicating with Overtone (is.gd/overtone), an expressive Clojure front-end to SuperCollider. Clojure is a state-of-the-art functional Lisp emphasising immutability and concurrency (clojure.org).

Shen/Kl arrive

The first publicly available version of Shen/Kl has been released.

The Shen mission is to develop an ultra-portable version of Qi that can run under a wide variety of platforms and which incorporates features missing from Qi, such as streams. The targeted platforms are CL, Clojure, NewLisp, Emacs Lisp, ECL, Scheme, JavaScript and Python, as well as C/C++, the DVM, the CLR and the JVM. This approach involved rewriting and redesigning Qi to fit within the smallest feasible instruction set.

Kl is an intentionally small ‘Lisp assembly language’ that enables Qi to be built on top of it.

more …

Shen Downloads.

ClojureScript Demo : Convex Hull

Update : fixed a bug where the hull was calculated incorrectly because duplicate points were generated in the random set.

ClojureScript looks like a solid approach to building applications that target JavaScript VMs. It’s built on top of Google’s Closure Compiler/Library, which is very intriguing and is the best approach they could have taken (now that I’ve played with it a little). Being new to both Closure and ClojureScript, I was curious about what it might feel like to build an application using these tools. I’ve mostly despised programming in JavaScript for browsers no matter what hyped-up library is available (this includes jQuery, which is the best of a bad bunch in my opinion). So I decided to write a ClojureScript application that runs in the browser, based on a previous Clojure implementation of a Convex Hull algorithm with heuristics.

This was a piece of cake. I really like the pre-compiled approach that relies on the Closure compiler/library. It just feels like you’re writing a regular application instead of trying to force the browser to do the ‘correct’ thing with run-time code and the DOM. There are a few differences that I ran into: a few functions don’t yet exist, and using macros is not as clean as I’d expect. Macros have to be implemented in Clojure and then referenced from ClojureScript. No big deal really.

Here’s the demo

Here’s all the UI code. Not much really at < 100 lines. Very cool.

(def edge-stroke (graphics/Stroke. 1 "#444"))
(def blue-edge-stroke (graphics/Stroke. 1 "#66b"))
(def green-edge-stroke (graphics/Stroke. 1 "#0f0"))
(def white-fill (graphics/SolidFill. "#fff"))
(def blue-fill (graphics/SolidFill. "#66b"))
(def green-fill (graphics/SolidFill. "#0f0"))
(def trans-fill (graphics/SolidFill. "#0f0" 0.001))

(def g
  (doto (graphics/createGraphics "440" "440")
    (.render (dom/getElement "graph"))))

(defn draw-graph
  []
  (let [canvas-size (. g (getPixelSize))]
    (.drawRect g 0 0
               (.width canvas-size) (.height canvas-size)
               edge-stroke white-fill)))

(defn scale-coord
  [coord]
  (+ 20 (* 4 coord)))

(defn draw-points
    [points stroke fill]
    (doseq  [[x y :as pt] points]
        (.drawEllipse g (scale-coord x) (scale-coord y) 
               3 3 stroke fill)))

(defn draw-convex-hull
    [points stroke fill]
    (let [path (graphics/Path.)
      [xs ys :as start] (first points)]
     (.moveTo path (scale-coord xs) (scale-coord ys))
     (doall (map (fn [[x y :as pt]]
             (.lineTo path (scale-coord x) (scale-coord y)))
             (rest points)))
     (.lineTo path (scale-coord xs) (scale-coord ys))
    (.drawPath g path stroke fill)))

(defn print-points
    [points el]
    (doseq [pair points]
       (dom/append el
               (str " [" (first pair) " " (second pair) "]"))))

(defn ^:export rundemo
  []
  (let [cnt 1E2
        rpts (apply vector
                    (map (fn [n]
                           [(rand-int (inc cnt))
                            (rand-int (inc cnt))])
                         (range 1 cnt)))
        text-input-title (dom/getElement "text-input-title")
        text-input (dom/getElement "text-input")
        text-results-status (dom/getElement "text-results-status")
        text-results (dom/getElement "text-results")]
    ;; draw all points
    (dom/append text-input-title
                (str "Random generation of " cnt " points..."))
    (draw-points rpts blue-edge-stroke blue-fill)
    (print-points rpts text-input)
    ;; calc hull
    (dom/append text-results-status
                (str "Calculating convex hull ..."))
    (let [r1 (randomset rpts false)
          r2 (randomset rpts true)]
      (dom/append text-results-status (str " done.\n"))
      ;; update the results
      (print-points r2 text-results)
      (dom/append text-results-status
                  (str "Convex hull has " (count r1) " points.\n"))
      ;; draw hull points
      (draw-points r1 green-edge-stroke green-fill)
      ;; draw hull
      (draw-convex-hull r1 green-edge-stroke trans-fill)
      ;; return the results
      [rpts r2])))

;; Auto-update
(defn ^:export poll
  []
  (let [timer (goog.Timer. 15000)]
    (do (rundemo)
        (. timer (start))
        (events/listen timer goog.Timer/TICK rundemo))))

The future of client-side programming just got way better thanks to Rich and team !
All code is here.

Purely Functional Data Structures & Algorithms : Fast Fourier Transform in Qi

In this second post in the series we look at an implementation of the always useful Fast Fourier Transform.

(FFT) An algorithm for computing the Fourier transform of a set of discrete data values. Given a finite set of data points, for example a periodic sampling taken from a real-world signal, the FFT expresses the data in terms of its component frequencies. It also solves the essentially identical inverse problem of reconstructing a signal from the frequency data.

The FFT is a mainstay of numerical analysis. Gilbert Strang described it as “the most important algorithm of our generation”. The FFT also provides the asymptotically fastest known algorithm for multiplying two polynomials.
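For readers who don’t know Qi, here is the same divide-and-conquer idea as an illustrative Python sketch (a textbook radix-2 Cooley–Tukey recursion; it assumes the input length is a power of two):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])                  # transform even-indexed samples
    odd = fft(x[1::2])                   # transform odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):              # combine halves with twiddle factors
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

# The same square wave used in the Qi test below: energy at bins 0 and 8.
spectrum = fft([0, 1] * 8)
```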

Our implementation comes in at just under 100 lines of code

(declare atan [number --> number])
(define atan X -> (ATAN X))

(declare cos [number --> number])
(define cos X -> (COS X))

(declare sin [number --> number])
(define sin X -> (SIN X))

(tc +)

\ Complex numbers \

(datatype complex
    Real : number; Imag : number;
    _____________________________
    [Real Imag] : complex;)

(define complex-mult
  {complex --> complex --> complex}
  [R1 I1] [R2 I2] -> [(- (* R1 R2) (* I1 I2))
                      (+ (* R1 I2) (* I1 R2))])

(define complex-add
  {complex --> complex --> complex}
  [R1 I1] [R2 I2] -> [(+ R1 R2) (+ I1 I2)])

(define complex-diff
  {complex --> complex --> complex}
  [R1 I1] [R2 I2] -> [(- R1 R2) (- I1 I2)])

\ Fast Fourier Transform \

(define butterfly-list
    {((list complex) * ((list complex) * (list complex)))
     --> ((list complex) * ((list complex) * (list complex)))}
    (@p X (@p X1 X2)) -> (if (empty? X)
                             (@p X (@p (reverse X1) (reverse X2)))
                              (@p (tail (tail X))
                                  (@p (cons (head X) X1)
                                      (cons (head (tail X)) X2)))))

(define calc-results
    {(((list complex) * (list (list complex))) * 
                        ((list complex) * (list complex)))
     --> (((list complex) * (list (list complex))) * 
                            ((list complex) * (list complex)))}
    (@p (@p [W WN] [YA YB]) (@p Y1 Y2)) ->
    (if (and (empty? Y1) (empty? Y2))
        (@p (@p [W WN] [(reverse YA) (reverse YB)]) (@p Y1 Y2))
         (@p (@p [(complex-mult W WN) WN]
                 [(cons (complex-add  (head Y1) (complex-mult W (head Y2))) YA)
                 (cons (complex-diff (head Y1) (complex-mult W (head Y2))) YB)])
             (@p (tail Y1) (tail Y2)))))

(define fft
    {number --> complex --> (list complex) --> (list complex)
     --> (list complex)}
    1 WN X Y -> [(head X)]
    2 WN X Y -> [(complex-add  (head X) (head (tail X)))
                 (complex-diff (head X) (head (tail X)))]
    N WN X Y -> (let M   (round (/ N 2))
                     Inp (butterfly-list (@p X (@p [] [])))
                     X1  (fst (snd Inp))
                     X2  (snd (snd Inp))
                     Y1  (fft M (complex-mult WN WN) X1 [])
                     Y2  (fft M (complex-mult WN WN) X2 [])
                     W   [1 0]
                     Res (calc-results (@p (@p [W WN] [[] []]) (@p Y1 Y2)))
                     (append (head (snd (fst Res)))
                              (head (tail (snd (fst Res)))))))

(define dotimes-fft
    {number --> number --> complex --> (list complex) --> (list complex)
    --> (list complex)}
    Iterations Size W Input Res ->
    (if (<= Iterations 0)
        Res
        (dotimes-fft (- Iterations 1) Size W Input
                     (fft Size W Input []))))

(define run-fft
    {number --> number --> (list complex)
     --> (list complex)}
    Iterations Size Input -> (let Pi    (* 4 (atan 1))
                                  Theta (* 2 (/ Pi Size))
                                  W     [(cos Theta) (* -1 (sin Theta))]
                                  (dotimes-fft Iterations Size W Input [])))

Let’s give it a spin …

\ Square wave test \

(26-) (time (run-fft 100000 16 
             [[0 0] [1 0] [0 0] [1 0] [0 0] [1 0] [0 0] [1 0]
              [0 0] [1 0] [0 0] [1 0] [0 0] [1 0] [0 0] [1 0]]))

Evaluation took:
  2.999 seconds of real time
  2.942718 seconds of total run time (2.798716 user, 0.144002 system)
  [ Run times consist of 0.371 seconds GC time, and 2.572 seconds non-GC time. ]
  98.13% CPU
  6,282,874,678 processor cycles
  1,641,619,888 bytes consed

[[8 0] [0.0 0.0] [0.0 0.0] [0.0 0.0] 
 [0.0 0.0] [0.0 0.0] [0.0 0.0] [0.0 0.0]
 [-8 0] [0.0 0.0] [0.0 0.0] [0.0 0.0] 
 [0.0 0.0] [0.0 0.0] [0.0 0.0] [0.0 0.0]] : (list complex)

All Qi code in this post is here.

Purely Functional Data Structures & Algorithms : Red-Black Trees in Qi

Update 2011/06/28 : Source has been modified to compile with Shen

This is the first in a series of posts that will demonstrate the implementation of many well-known (and less-known) data structures and algorithms using a purely functional approach.
We will use Qi as our implementation language for a number of reasons :

    It’s a Lisp : macros, EVAL, hash-tables, property-lists, meta-programming etc.
    Pattern matching.
    Optional static type checking.
    A Turing-complete type system !

In this first post we look at an implementation of the well-known Red-Black tree abstract data type in Qi.

A red–black tree is a type of self-balancing binary search tree, a data structure used in computer science, typically to implement associative arrays. The original structure was invented in 1972 by Rudolf Bayer and named “symmetric binary B-tree,” but acquired its modern name in a paper in 1978 by Leonidas J. Guibas and Robert Sedgewick. It is complex, but has good worst-case running time for its operations and is efficient in practice: it can search, insert, and delete in O(log n) time, where n is the total number of elements in the tree. Put very simply, a red–black tree is a binary search tree that inserts and removes intelligently, to ensure the tree is reasonably balanced.

Our implementation comes in at 57 lines of code (with the balance function at only 7 lines)

(tc +)

(datatype tree-node
    Key : number; Val : B;
    ______________________
    [Key Val] : tree-node;)

(datatype color
    if (element? Color [red black])
    _______________________________
    Color : color;)

(datatype tree
    if (empty? Tree)
    ________________
    Tree : tree;

    Color : color; LTree : tree; TreeNode : tree-node; RTree : tree;
    ________________________________________________________________
    [Color LTree TreeNode RTree] : tree;)

(define node-key
    {tree-node --> number}
    [Key Val] -> Key)

(define make-tree-black
    {tree --> tree}
    [Color A X B] -> [black A X B])

(define member
    {tree-node --> tree --> boolean}
    X [] -> false
    X [Color A Y B] -> (if (< (node-key X) (node-key Y))
                           (member X A)
                           (if (< (node-key Y) (node-key X))
                               (member X B)
                               true)))

(define balance
    {tree --> tree}
    [black [red [red A X B] Y C] Z D] -> [red [black A X B] Y [black C Z D]]
    [black [red A X [red B Y C]] Z D] -> [red [black A X B] Y [black C Z D]]
    [black A X [red [red B Y C] Z D]] -> [red [black A X B] Y [black C Z D]]
    [black A X [red B Y [red C Z D]]] -> [red [black A X B] Y [black C Z D]]
    S -> S)

(define insert-
    {tree-node --> tree --> tree}
    X [] -> [red [] X []]
    X [Color A Y B] -> (if (< (node-key X) (node-key Y))
                           (balance [Color (insert- X A) Y B])
                           (if (< (node-key Y) (node-key X))
                               (balance [Color A Y (insert- X B)])
                               [Color A Y B])))

(define insert
  {tree-node --> tree --> tree}
  X S -> (make-tree-black (insert- X S)))

This is a reasonably performant implementation (we haven’t even tried to optimize it yet).

(19-) (run-tests NIL)
tree: [black
       [red [black [red [] [1 1] []] [2 2] [red [] [5 5] []]] [7 7]
        [black [red [] [8 8] []] [11 11] []]]
       [14 14] [black [] [15 15] []]]
12 is a member ? false
8 is a member ? true

Creating tree with 100000 elements ...
Evaluation took:
  0.578 seconds of real time
  0.562833 seconds of total run time (0.491572 user, 0.071261 system)
  [ Run times consist of 0.160 seconds GC time, and 0.403 seconds non-GC time. ]
  97.40% CPU
  1,210,617,335 processor cycles
  168,551,696 bytes consed

Performing lookups in tree with 100000 elements ...
666 in tree ? true
Evaluation took:
  0.000 seconds of real time
  0.000044 seconds of total run time (0.000035 user, 0.000009 system)
  0.00% CPU
  86,110 processor cycles
  0 bytes consed

-1 in tree ?
Evaluation took:
  0.000 seconds of real time
  0.000024 seconds of total run time (0.000021 user, 0.000003 system)
  100.00% CPU
  46,368 processor cycles
  0 bytes consed

A comparable implementation in Java or C++ will usually run to a few hundred lines of code.
All Qi code in this post is here.

Chaitin Proving Darwin

White paper : To a mathematical theory of evolution and biological creativity

We present an information-theoretic analysis of Darwin’s theory of
evolution, modeled as a hill-climbing algorithm on a fitness landscape.
Our space of possible organisms consists of computer programs, which
are subjected to random mutations. We study the random walk of increasing
fitness made by a single mutating organism. In two different
models we are able to show that evolution will occur and to characterize
the rate of evolutionary progress, i.e., the rate of biological creativity.

For many years we have been disturbed by the fact that there is no fundamental
mathematical theory inspired by Darwin’s theory of evolution.
This is the fourth paper in a series attempting to create
such a theory.

In a previous paper we did not yet have a workable mathematical framework:
We were able to prove two not very impressive theorems, and then the
way forward was blocked. Now we have what appears to be a good mathematical
framework, and have been able to prove a number of theorems. Things
are starting to work, things are starting to get interesting, and there are many
technical questions, many open problems, to work on.

So this is a working paper, a progress report, intended to promote interest
in the field and get others to participate in the research. There is much to be

Spring/Summer 2011 Books

The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind – Marvin Minsky

Minsky argues that emotions are different ways to think that our mind uses to increase our intelligence. He challenges the distinction between emotions and other kinds of thinking. His main argument is that emotions are “ways to think” for the different “problem types” that exist in the world. The brain has rule-based mechanisms (selectors) that turn on emotions to deal with various problems. The book reviews the accomplishments of AI, what is complicated to accomplish (and why) in terms of modeling how human beings behave, how they think, and how they experience struggles and pleasures. (Wikipedia)

The Moral Landscape – Sam Harris

In this explosive new book, Sam Harris tears down the wall between scientific facts and human values, arguing that most people are simply mistaken about the relationship between morality and the rest of human knowledge. Harris urges us to think about morality in terms of human and animal well-being, viewing the experiences of conscious creatures as peaks and valleys on a “moral landscape.” Because there are definite facts to be known about where we fall on this landscape, Harris foresees a time when science will no longer limit itself to merely describing what people do in the name of “morality”; in principle, science should be able to tell us what we ought to do to live the best lives possible.

Bringing a fresh perspective to age-old questions of right and wrong, and good and evil, Harris demonstrates that we already know enough about the human brain and its relationship to events in the world to say that there are right and wrong answers to the most pressing questions of human life. Because such answers exist, moral relativism is simply false—and comes at increasing cost to humanity. And the intrusions of religion into the sphere of human values can be finally repelled: for just as there is no such thing as Christian physics or Muslim algebra, there can be no Christian or Muslim morality.

Using his expertise in philosophy and neuroscience, along with his experience on the front lines of our “culture wars,” Harris delivers a game-changing book about the future of science and about the real basis of human cooperation.

In the Plex: How Google Thinks, Works, and Shapes Our Lives – Steven Levy

Few companies in history have ever been as successful and as admired as Google, the company that has transformed the Internet and become an indispensable part of our lives. How has Google done it? Veteran technology reporter Steven Levy was granted unprecedented access to the company, and in this revelatory book he takes readers inside Google headquarters—the Googleplex—to show how Google works.

While they were still students at Stanford, Google cofounders Larry Page and Sergey Brin revolutionized Internet search. They followed this brilliant innovation with another, as two of Google’s earliest employees found a way to do what no one else had: make billions of dollars from Internet advertising. With this cash cow (until Google’s IPO nobody other than Google management had any idea how lucrative the company’s ad business was), Google was able to expand dramatically and take on other transformative projects: more efficient data centers, open-source cell phones, free Internet video (YouTube), cloud computing, digitizing books, and much more.

The key to Google’s success in all these businesses, Levy reveals, is its engineering mind-set and adoption of such Internet values as speed, openness, experimentation, and risk taking. After its unapologetically elitist approach to hiring, Google pampers its engineers—free food and dry cleaning, on-site doctors and masseuses—and gives them all the resources they need to succeed. Even today, with a workforce of more than 23,000, Larry Page signs off on every hire.

But has Google lost its innovative edge? It stumbled badly in China—Levy discloses what went wrong and how Brin disagreed with his peers on the China strategy—and now with its newest initiative, social networking, Google is chasing a successful competitor for the first time. Some employees are leaving the company for smaller, nimbler start-ups. Can the company that famously decided not to be evil still compete?

No other book has ever turned Google inside out as Levy does with In the Plex.

Happy Pi Day ! (in Qi II)

Qi is the future of Lisp.
It is Lisp with many great features such as pattern matching, a Turing-complete static type system (even more powerful than Haskell’s type system) and many others.

So in the spirit of Pi Day, here’s an implementation that calculates pi using Machin’s formula.
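The scheme is the same in any language with bignums: work in fixed-point units of 10^(digits + guard) and expand arctan(1/x) as an integer Taylor series. Here is an illustrative Python sketch of that fixed-point approach (not a line-by-line translation of the Qi code below):

```python
def arctan_inv(x, unity):
    """arctan(1/x) scaled by unity, using only integer arithmetic."""
    total = term = unity // x
    n, sign, x2 = 3, -1, x * x
    while term:
        term //= x2                      # next power of 1/x^2
        total += sign * (term // n)      # +/- (1/x^(2k+1)) / (2k+1)
        n, sign = n + 2, -sign
    return total

def machin_pi(digits):
    """pi truncated to `digits` decimals, via 4*(4*arctan(1/5) - arctan(1/239))."""
    guard = 10                           # extra digits absorb rounding error
    unity = 10 ** (digits + guard)
    pi = 4 * (4 * arctan_inv(5, unity) - arctan_inv(239, unity))
    return pi // 10 ** guard             # drop the guard digits

print(machin_pi(30))                     # 3141592653589793238462643383279
```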

(define ceiling 
  X -> (CEILING X))
(declare ceiling [number --> number])

(define expt 
  X Y -> (EXPT X Y))
(declare expt [number --> number --> number])

(define floor 
  X Y -> (FLOOR X Y))
(declare floor [number --> number --> number])

(define log
  X Y -> (LOG X Y))
(declare log [number --> number --> number])

(tc +)

(define arccot-
  {number --> number --> number --> number --> number --> number} 
  X N XPOWER    0 _ -> 0
  X N XPOWER TERM 1 -> (+ (arccot- X (+ N 2) (floor XPOWER X) 
                                     (floor XPOWER N) 0) (floor XPOWER N))
  X N XPOWER TERM 0 -> (- (arccot- X (+ N 2) (floor XPOWER X) 
                                      (floor XPOWER N) 1) (floor XPOWER N)))

(define arccot
  {number --> number --> number}
  X UNITY -> (let XPOWER (floor (/ UNITY X) 1)
                  (arccot- (* X X) 1 XPOWER (floor XPOWER 1) 1)))

(define machin-pi
  {number --> number} 
  DIGITS -> (let GUARD (+ 10 (ceiling (log DIGITS 10)))
                 UNITY (expt 10 (+ DIGITS GUARD))
                 (floor (* 4 (- (* 4 (arccot 5 UNITY)) 
                                (arccot 239 UNITY))) (expt 10 GUARD))))

And the output …

(time (machin-pi 10000))

Evaluation took:
  0.379 seconds of real time
  0.372112 seconds of total run time (0.269730 user, 0.102382 system)
  [ Run times consist of 0.055 seconds GC time, and 0.318 seconds non-GC time. ]
  98.15% CPU
  903,769,308 processor cycles
  78,698,160 bytes consed

314159265358979323846264338327950 ... 1655256375678 : number

Compared with Common Lisp, Haskell and Clojure.

Philosophy and Lisp

Programming language wars don’t have to be religious wars. Programming languages should be rooted in philosophy. The more a programming language is rooted in sound philosophy, the more value it has.

Over the years, many of the posts on this blog have been regarding some programming language, algorithm or technology. Some posts have highlighted why Lisp is the most powerful and useful programming language paradigm available to man at this point in the history of computer science.

Explicitly pointing out examples of Lisp code is always insightful and important (at least to those open to evidence and reason).

Still there are people who cannot (or will not?) grasp just why Lisp is, has been (for the past half-century) and will be so important to the development and growth of computer science. For example, some people, in spite of having read Paul Graham’s clear essays on Lisp (which make it really easy to grasp why Lisp is important), still often seem to parrot incoherent, illogical arguments and myths against Lisp.

My goal with many of the blog posts here has been to bring some understanding to folks interested in Lisp and computer science topics that are based on integrity, and that are therefore of real value to those who pursue them.

Within computer science, academia and industry, too many disparate choices are presented to the various stakeholders, from the cubicle dwellers all the way up to the CEOs and professors. The elephant in the room with all these choices (and what most of them have in common) is that they are lacking in integrity and value. Profit, control, ignorance, altruism, stupidity, inexperience, grant money, incapability, kickbacks, bonuses, salaries, titles, fear and many other reasons explain why integrity and value are lacking.

The same has happened with all the other sciences too, and from a philosophical standpoint the causes are very similar.

At this point some may ask why philosophy is even important to computer science, let alone to a programming language called Lisp. The kind of person who asks this question is usually the kind of person who has never understood why philosophy itself is so important. Well, just how important is philosophy ? The short answer is that, after life itself, philosophy is the second most important thing to a human being.

It’s critical, after stating the high importance of philosophy, that I quickly define what I mean by philosophy. By philosophy I mean the study of the fundamental nature of knowledge, reality and existence using the tools of observation, evidence, empiricism, logic and reason. This is the classical philosophy of Aristotle and Socrates, which is rational absolutism. It is NOT the charlatan ‘philosophy’ of mysticism, positivism, relativism, perspectivism, nihilism and altruism of Plato, Marx, Immanuel Kant, Kierkegaard, Hegel and so many others, whose theories have tragically played out in human history and some of which, unfortunately, are still adhered to right up until now. They are more correctly called what they are : ideologies or religions. Religion is irrational absolutism. The philosophy I am talking about is made distinct in that it is rational absolutism. It is therefore not for bar-room outbursts or musings between tenured professors in dusty old buildings. It is not the salesy popular positive-thinking conventions, or the caffeine-overdose incoherent babbling at church or AA gatherings. It is not the foggy, upbeat, tangled ramblings of relativism at Burning Man. It is a study that has practical applications right from the start.

Without philosophy you would not be reading this blog post. You would not have a computer; there would be no internet. Society would not have produced books, hygiene would not exist, and the Enlightenment would never have happened. Mathematics and the sciences would never have advanced to where they are today, and we would not be benefiting from them, if it weren’t for philosophy. From the dysfunctional quirks to the atrocities perpetuated by conflict around this planet, which we all witness each day in society, the root cause is the problem of ideologies and religions usurping the rightful place of philosophy. Philosophy is a matter of life and death. Philosophy is as critical to ethics and morality as it is to mathematics and science. This has been conclusively proved from first principles, so I will not do it here. The human race has advanced in technology further than anyone could have imagined, and yet we still resort to coercion and violence at all levels of society. This is because we have not based our ethics and morality on philosophy. Instead we have handed the responsibility of moral and ethical definition to authority figures : the government, industry, academia, the church, and well-intentioned but dishonest and flawed parental coercion that does incalculable damage to children and then plays out in our societies through their adult lives, in crime and violence, or, if we’re lucky, merely in benign stupidity and arrested personal development.

Well, that’s quite the detour, but it was important to highlight the importance of philosophy.
Here I put forward that Lisp’s outstanding importance to computer science, compared with other programming languages, is based on its solid philosophical foundation. This is quite simple to prove, and I will do so in a few paragraphs.
Lisp is based on the lambda calculus. The lambda calculus is a formal system for function definition, function application and recursion. Lisp’s contribution to programming language theory is unfortunately, for the most part, unrecognized by the majority of programmers today. For example, Lisp and typed lambda calculi serve as the foundation for modern type systems. At the other end, no equivalent of Lisp’s concept of macros exists in any other programming language even today. If there were, that programming language would be a Lisp implementation.

Let’s look further down at Lisp. I stated that Lisp is based on the formal system of the lambda calculus. The lambda calculus is built on functions, logic and predicates, recursion, and a number of other important concepts that are fundamental in mathematics. It would follow that the greater the fidelity a programming language has to these mathematical concepts, and the more it builds upon them, the more powerful the programming language will be. History provides the evidence : no other programming language has done this better than Lisp.

We could go even deeper and ask why these fundamental mathematical concepts are so crucial. The answer takes us into philosophy, upon which mathematics is based. Sound philosophy demanded that these mathematical concepts be tested by evidence, logic and rigor, from some very basic premises that were built up into more complex and powerful structures of thought which were proved to be true. Metaphorically, mathematics and the sciences are trees which can only grow in the soil of philosophy. The reasons are plain as to why religion, superstition and mysticism are not responsible for putting man on the moon or leading to the discovery of DNA.

The scientific/mathematical side of Lisp is just half of the explanation though. The other half of Lisp is the ethical and moral side. Stay with me. Most programmers hardly ever associate a programming language with ethics and morality, but they do play a role in the design and use of a language. This is because human beings must use these programming languages. They must fill their minds with the concepts and limitations that these programming languages require for their use and application. When a language designer decrees (when he should instead be deducing) that some of the features that were available to him in the design are too powerful for the users of his language, then he is in the realm of morality and ethics, and as such is subject to valid moral and ethical scrutiny, which is in turn based on rational and evidence-based philosophy. You may be a computer scientist, but you are still to be held morally and ethically responsible for your creation and what it subjects the minds of your users to. On a daily basis, and for the masses of programmers, their language is unfortunately seen as a religious preference. It is an ideology forced upon them by their indoctrination from their peer group in academia or industry. Many are not even aware that their everyday programming language even matters. Most just don’t care. They are unaware of how the languages that they use affect their ability to think and solve problems.

Lisp’s design was such that it considered the user of the language to be equally as important as the designer of the language. This shows in that Lisp has compile-time and run-time macros, which effectively allow users to change the language itself at its most basic level if they so desire. Contrast this design with the dictatorial designs of popular languages in industry. Common Lisp’s design takes the freedom of the user even more seriously, being a multi-paradigm Lisp.

In conclusion, I don’t want to suggest that everyone should be using Lisp against their will. That would run counter to the philosophy of Lisp. Lisp is not a religion in the way other programming languages are seen. The myth of the newbie-eating Lisp hacker is just that, a myth. Lisp is embraced by the minority, just as the sciences are. It has been shown that Lisp is based on sound philosophical principles and that these have resulted in it being the most successful (not most popular) programming language in history. Its contribution to programming language theory is remarkable. It has also imparted enjoyment, programming power and cognitive freedom to its users like no other programming language has.

Keep Lisping !