Latent Semantic Indexing (LSI) is used widely today in semantic search and has many other uses in machine learning. This is a pretty good explanation/visualization from 40 years ago.

# Two sorts with Rust

Here are some initial thoughts on Rust, almost two years since I last looked at it, along with implementations of merge sort and quick sort. (These are just my opinions, so please don’t panic!)

1. Cargo is awesome for managing package dependencies and building projects.

2. Rust is a very nice systems programming language that supports a functional style of programming. Much easier to work with than C/C++, with very nearly the same performance in many cases.

3. A strong, inferred static type system!

4. The authors have not neglected the rich and varied history of programming language theory while trying to be very pragmatic about the design and usefulness of the language.

Let’s look at implementing some popular sorting algorithms. First … quick sort.

I won’t be pedantic here by explaining the quick sort algorithm, but an elegant and performant implementation is ~20 LOC. Not bad for a systems-level programming language.

```rust
pub fn quicksort_rec(nums: Vec<u64>) -> Vec<u64> {
    match nums.len() {
        cnt if cnt <= 1 => nums,
        cnt => {
            let mut left = Vec::new();
            let mut right = Vec::new();
            let pivot = nums[0];
            for i in 1..cnt {
                match nums[i] {
                    num if num < pivot => left.push(num),
                    num => right.push(num),
                }
            }
            let mut left_sorted = quicksort_rec(left);
            let mut right_sorted = quicksort_rec(right);
            left_sorted.push(pivot);
            left_sorted.append(&mut right_sorted);
            left_sorted
        }
    }
}
```

An implementation of merge sort is a bit longer at ~35 LOC.

```rust
fn merge(mut left: Vec<u64>, mut right: Vec<u64>) -> Vec<u64> {
    let mut merged = Vec::new();
    // Pop the larger of the two tails so `merged` is built in
    // descending order, then reverse once at the end.
    while !left.is_empty() && !right.is_empty() {
        if left.last() >= right.last() {
            merged.push(left.pop().unwrap());
        } else {
            merged.push(right.pop().unwrap());
        }
    }
    while !left.is_empty() {
        merged.push(left.pop().unwrap());
    }
    while !right.is_empty() {
        merged.push(right.pop().unwrap());
    }
    merged.reverse();
    merged
}

pub fn mergesort_rec(nums: Vec<u64>) -> Vec<u64> {
    match nums.len() {
        cnt if cnt <= 1 => nums,
        cnt => {
            let mut left = Vec::new();
            let mut right = Vec::new();
            let middle = cnt / 2;
            for i in (0..middle).rev() {
                left.push(nums[i]);
            }
            for i in (middle..cnt).rev() {
                right.push(nums[i]);
            }
            merge(mergesort_rec(left), mergesort_rec(right))
        }
    }
}
```

Lastly, here are the timings for my very CPU under-powered laptop …
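The timings came from a quick harness along these lines. This is a hedged sketch of my own (the `time_sort` helper and the input generation are not from the post); the `quicksort_rec`/`mergesort_rec` functions above would be passed in where std’s sort is used below as a stand-in.

```rust
use std::time::Instant;

// Hypothetical helper: times any Vec<u64> -> Vec<u64> sorting function
// and prints the elapsed wall-clock duration.
fn time_sort(name: &str, sort: impl Fn(Vec<u64>) -> Vec<u64>, input: &[u64]) -> Vec<u64> {
    let start = Instant::now();
    let sorted = sort(input.to_vec());
    println!("{}: {:?}", name, start.elapsed());
    sorted
}

fn main() {
    // Deterministic pseudo-random input so runs are comparable.
    let input: Vec<u64> = (0..100_000u64)
        .map(|i| i.wrapping_mul(2654435761) % 1_000_000)
        .collect();
    // std's sort as a stand-in; quicksort_rec / mergesort_rec plug in the same way.
    let sorted = time_sort("std sort (stand-in)", |mut v| { v.sort(); v }, &input);
    assert!(sorted.windows(2).all(|w| w[0] <= w[1]));
}
```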

# Ring probabilities in F#

A few months back I took a look at Elixir. More recently I’ve been exploring F#, and I’m very pleased with the experience so far. Here is the ring probabilities algorithm implemented using F#. It’s unlikely that I will ever use Elixir again, because having the powerful static type system F# provides at my disposal is just too good.

```fsharp
let rec calcStateProbs (prob: float, i: int, currProbs: float [], newProbs: float []) =
    if i < 0 then
        newProbs
    else
        let maxIndex = currProbs.Length - 1
        // Match prev, next probs based on the fact that this is a
        // ring structure.
        let (prevProb, nextProb) =
            match i with
            | i when i = maxIndex -> (currProbs.[i-1], currProbs.[0])
            | 0 -> (currProbs.[maxIndex], currProbs.[i+1])
            | _ -> (currProbs.[i-1], currProbs.[i+1])
        let newProb = prob * prevProb + (1.0 - prob) * nextProb
        Array.set newProbs i newProb
        calcStateProbs(prob, i-1, currProbs, newProbs)

let calcRingProbs parsedArgs =
    // Probs at S = 0.
    // Make certain that we are positioned at only start location.
    // e.g. P(Start Node) = 1
    let startProbs =
        Array.concat [ [| 1.0 |] ; [| for _ in 1 .. parsedArgs.nodes - 1 -> 0.0 |] ]
    let endProbs =
        List.fold
            (fun probs _ ->
                calcStateProbs(parsedArgs.probability, probs.Length - 1, probs,
                               Array.create probs.Length 0.0))
            startProbs
            [1..parsedArgs.states]
    endProbs
```

Here’s the code.

No promises this time but I may follow this sequential version up with a parallelized version.

# Ring probabilities with Elixir

I’ve been hearing more about Elixir lately so I thought I’d take it for a spin.

*“Elixir is a functional, meta-programming aware language built on top of the Erlang VM. It is a dynamic language that focuses on tooling to leverage Erlang’s abilities to build concurrent, distributed and fault-tolerant applications with hot code upgrades.”*

I’ve never really spent any time with Erlang, but I’ve always been curious about it and about the fact that it’s one of the best-kept ‘secrets’ in many startups these days. I’ve heard for years how easy it is to ‘scale out’ compared with many other languages and platforms.

Joe Armstrong, the creator of Erlang, wrote a post about Elixir in which he seemed to really like it except for some minor things. This got me even more curious so I decided to write some code that seemed like it could benefit from the features provided by Elixir for easily making suitable algorithms parallelizable.

Let’s talk about ring probabilities. Say we have a cluster of N nodes in a ring topology, and some code that requires S steps to run, where each subsequent step runs on the node to the right of the previous node with some probability P.

In the initial state (S=0) the probability of some piece of code running on node A is P=1.

At the next step (S=1) the probability of the step running on a node to the right in the ring is P and the probability of the step running on a node to the left is 1-P.

Here is an example with some crude ASCII diagrams to represent this visually:

Initial node probability for a 5 node ring at S=0 is P=1 for the starting node:

```
N = 5 (nodes), S = 0 (initial state)
1 - P = 0.5 (counter-clockwise)    P = 0.5 (clockwise)

            +-------+
     +------|  1.0  |------+
     |      +-------+      |
  +--+--+               +--+--+
  | 0.0 |               | 0.0 |
  +--+--+               +--+--+
     |   +-----+ +-----+   |
     +---| 0.0 |-| 0.0 |---+
         +-----+ +-----+
```

Node probabilities for the same 5 node ring after 2 state transitions:

```
N = 5 (nodes), S = 2
1 - P = 0.5 (counter-clockwise)    P = 0.5 (clockwise)

            +-------+
     +------|  0.5  |------+
     |      +-------+      |
  +--+--+               +--+--+
  | 0.0 |               | 0.0 |
  +--+--+               +--+--+
     |  +------+ +------+  |
     +--| 0.25 |-| 0.25 |--+
        +------+ +------+
```
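As a sanity check on those diagrams, here is a hedged sketch of one ring state transition in Rust (my own addition, not the post’s Elixir; the `step` function name is mine):

```rust
// One ring state transition: probs[i] is the probability of being at
// node i. With probability p the step moves clockwise, so node i
// inherits probability from its counter-clockwise neighbour, and with
// probability 1 - p it inherits from its clockwise neighbour.
fn step(p: f64, probs: &[f64]) -> Vec<f64> {
    let n = probs.len();
    (0..n)
        .map(|i| {
            let prev = probs[(i + n - 1) % n]; // counter-clockwise neighbour
            let next = probs[(i + 1) % n];     // clockwise neighbour
            p * prev + (1.0 - p) * next
        })
        .collect()
}

fn main() {
    // 5-node ring, start at node 0 with certainty, P = 0.5, two steps.
    let mut probs = vec![1.0, 0.0, 0.0, 0.0, 0.0];
    for _ in 0..2 {
        probs = step(0.5, &probs);
    }
    println!("{:?}", probs); // matches the diagram: [0.5, 0.0, 0.25, 0.25, 0.0]
}
```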

Let’s first write the sequential version of the algorithm to calculate the ring probabilities; the parallel version will be handled in the next post. Data types in Elixir are pretty basic at this point, with Elixir still not having reached 1.0. I decided to use an array to represent the ring in anticipation of later parallelizing the algorithm. A list seemed unsuitable for this because access times are linear and a parallel map across the structure would most likely be required. For a sequential version it’s interesting that a list is actually the fastest data structure to use in combination with recursion and pattern matching, but I’ll get into that in the next post.

For now let’s get back to implementing a sequential version with an array and the map function …

Elixir doesn’t have an array or vector type (yet?). I’m not going to comment on this. Instead we will use Erlang’s array type. Dipping into Erlang libraries from Elixir is pretty trivial, so it’s no big deal, other than that Elixir’s parameter conventions for function calls are the reverse of Erlang’s, which can be a little annoying.

Let’s look at the function for calculating the node probabilities given a direction change probability, number of nodes and state count :

```elixir
def calc_ring_probs(p, n, s)
    when is_float(p) and p >= 0 and p <= 1
     and is_integer(n) and n > 0
     and is_integer(s) and s >= 0 do

  # Probs at S = 0.
  # Certain that we are positioned at only start location.
  # e.g. P(Start Node) = 1
  initial_probs = :array.new(size: n, fixed: true, default: 0.0)
  initial_probs = :array.set(0, 1.0, initial_probs)

  IO.puts "Calculating ring node probabilities where P=#{p} N=#{n} S=#{s} ...\n"

  # If we are moving beyond the initial state then reduce
  # through all the states; otherwise keep the initial probs.
  if s > 0 do
    Enum.reduce 1..s, initial_probs, fn (_, new_probs) ->
      calc_state_probs(p, new_probs)
    end
  else
    initial_probs
  end
end
```

The first thing you might notice at the beginning of calc_ring_probs are the guard clauses (when …) after the function parameter definition. This is a nice way of ensuring some pre-conditions are met for the function to return meaningful results.

We check that the probability parameter is a float within the range 0.0 to 1.0, that the node count is an integer greater than zero, and that the state count is an integer of zero or more.

Next the initial probabilities are created using an Erlang array. If the required state is not the initial state (S=0), then we reduce over the number of states, calculating the probabilities of the ring at each state (calc_state_probs) until we reach the final state.

Now let’s look at the implementation of calc_state_probs.

```elixir
def calc_state_probs(p, prev_probs)
    when is_float(p) and p >= 0 and p <= 1 do
  sz = :array.size(prev_probs)
  :array.map(fn (i, _) ->
    prev_i = if i == 0 do sz - 1 else i - 1 end
    prev_prob = :array.get(prev_i, prev_probs)
    next_i = if i == sz - 1 do 0 else i + 1 end
    next_prob = :array.get(next_i, prev_probs)
    p * prev_prob + (1 - p) * next_prob
  end, prev_probs)
end
```

The function takes the probability P and an array of the previous probabilities. We determine the previous and next node indexes from the current index: if the current index is the first or last in the array, the previous index wraps to the last and the next index wraps to the first, respectively. We then calculate the current node’s probability from the previous and next node probabilities, weighted by P and 1-P respectively.

That’s really all there is to the sequential version.

On a macbook air a 1,000,000 node ring over 10 state changes takes ~7.4 seconds.

```
bash-3.2$ mix run lib/ring_probs.ex 0.5 1000000 10
Calculating ring node probabilities where P=0.5 N=1000000 S=10 ...

{:array, 50, 0, 0.0,
 {{0.24609375, 0.0, 0.205078125, 0.0, 0.1171875, 0.0, 0.0439453125, 0.0,
   0.009765625, 0.0},
  {9.765625e-4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0},
  {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0},
  {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0},
  {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0},
  10, 10, 10, 10, 10, 10}}

... 999950 node probabilities ...

{:array, 999950, 0, 0.0,
 {{{{{{0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0},
      {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0},
      {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0},
      {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0},
      {9.765625e-4, 0.0, 0.009765625, 0.0, 0.0439453125, 0.0, 0.1171875, 0.0,
       0.205078125, 0.0},
      10, 10, 10, 10, 10, 10},
     100, 100, 100, 100, 100, 100, 100, 100, 100, 100},
    1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000},
   10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000},
  100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000,
  100000}}

calc time: 7370.12 msecs
```

The complete code can be found here.

My reluctance with Elixir is that it’s a strongly but dynamically typed language. This is much the same issue I’ve had with Erlang. There are ways to work around this; one is using a static analysis tool (read this for more info). Apparently success types are a way to correctly infer types in Erlang. I can’t say that I’m convinced; my own experience has shown that any production system that needs to scale requires at least something like type hinting. I might be wrong, and in fact I hope I am, because I like what I’ve seen of Elixir and heard about the Erlang VM for building distributed systems.

In the next post we’ll re-write the code to make it run concurrently and also look at how a sequential version of the algorithm using recursion, pattern matching and lists is an order of magnitude faster than the sequential version using arrays in this post.

The sequential recursive version may even be faster than a concurrent version depending on how many cores your machine has ;-)

# All Watched Over by Machines of Loving Grace

I can’t say that I agree with all of the conclusions in this series, but the perspectives are very interesting food for thought. The Silicon Valley idea that technology holds some intrinsic, automatically redeeming value for humans is arguably destructive for society. The outright commodification of human emotion and the mediation of human communication through social media giants are not by default great for society, contrary to what the technology utopians would have us believe. Also, whether or not technology actually levels the playing field is highly debatable. Love and Power truly are never going away and can never be controlled, whether by elite individuals or large groups of everyday utopians. Nevertheless, this is part of the human drama that many seem to need and that some endlessly crave.

**All Watched Over by Machines of Loving Grace** is a 2011 BBC documentary series by filmmaker Adam Curtis. The series argues that computers have failed to liberate humanity and instead have “distorted and simplified our view of the world around us”. The title is taken from the 1967 poem of the same name by Richard Brautigan. The first of three episodes aired on Monday 23 May 2011 at 9pm on BBC2.

Part 2: The Use and Abuse of Vegetational Concepts

Part 3: The Monkey In The Machine and the Machine in the Monkey

# Clojure’s growing popular mind share

Popular interest in Clojure has increased rapidly since 2008, almost to the level of popular interest in Java (the language) today, which has dropped off significantly. (At least according to Google web trends.)

In contrast, popular interest in Common Lisp seems to have dropped off steadily since 2004.

I used “java language” instead of “java” because “java” is ambiguous enough to mean the language, the framework, the JVM, the island or the coffee.

# Corporate funding for Shen

It looks like it might be coming sooner than I thought. I’m sure Shenturions everywhere will find this news incredibly exciting for the future of Shen. I can’t wait to see how things progress.

# OS X 10.9 “Sea Lion” finally supports OpenCL on Intel HD 4000/5000

# O-notation considered harmful (use Analytic Combinatorics instead)

For a long time I’ve been skeptical about the classic approach of the “Theory of Algorithms” and its misuse and misunderstanding by many software engineers and programmers. Big *O*, Big Theta Θ and Big Omega Ω notations are often not useful for comparing the performance of algorithms in practice. They are often not even useful for classifying algorithms. They are useful for determining the theoretical limits of an algorithm’s performance: its theoretical lower bound, upper bound or both.

I’ve had to painfully and carefully argue this point a few times as an interviewee and many times as part of a team of engineers. In the first case it can mean the difference between impressing the interviewer or missing out on a great career opportunity due simply to ignorance and/or incorrigibility of the person interviewing you. In the latter it could mean wasted months or even years in implementation effort and/or a financial failure in the worst case.

In practice the O-notation approach to algorithmic analysis can often be quite misleading. Quick Sort vs. Merge Sort is a great example. Quick Sort is classified as quadratic time O(*n*²) (its worst case) and Merge Sort as log-linear time O(*n* log *n*). In practice, however, Quick Sort often performs twice as fast as Merge Sort and is also far more space efficient. As many folks know, this has to do with the typical inputs of these algorithms in practice. Most engineers I know would still argue that Merge Sort is a better solution, and apparently Robert Sedgewick has had the same argumentative response even though he is an expert in the field. In the lecture he kindly says the following: *“… Such people usually don’t program much and shouldn’t be recommending what practitioners do”*.
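To make the point concrete, here is a hedged micro-benchmark sketch of my own (not from Sedgewick’s lecture): a textbook in-place quicksort with a Lomuto partition against a textbook allocating merge sort, on pseudo-random input. Absolute timings vary by machine, and this proves nothing by itself, but it shows how easy it is to check the claim empirically rather than argue from worst-case bounds.

```rust
use std::time::Instant;

// Textbook quicksort with a Lomuto partition (last element as pivot).
// Worst case O(n^2), but fast on typical (non-adversarial) inputs.
fn quicksort(v: &mut [u64]) {
    if v.len() <= 1 { return; }
    let pivot = v[v.len() - 1];
    let mut store = 0;
    for i in 0..v.len() - 1 {
        if v[i] < pivot {
            v.swap(i, store);
            store += 1;
        }
    }
    v.swap(store, v.len() - 1);
    let (left, right) = v.split_at_mut(store);
    quicksort(left);
    quicksort(&mut right[1..]); // right[0] is the pivot, already placed
}

// Textbook top-down merge sort, allocating new vectors at each level.
// Guaranteed O(n log n), but with extra allocation and copying.
fn mergesort(v: &[u64]) -> Vec<u64> {
    if v.len() <= 1 { return v.to_vec(); }
    let mid = v.len() / 2;
    let (left, right) = (mergesort(&v[..mid]), mergesort(&v[mid..]));
    let (mut i, mut j) = (0, 0);
    let mut merged = Vec::with_capacity(v.len());
    while i < left.len() && j < right.len() {
        if left[i] <= right[j] { merged.push(left[i]); i += 1; }
        else { merged.push(right[j]); j += 1; }
    }
    merged.extend_from_slice(&left[i..]);
    merged.extend_from_slice(&right[j..]);
    merged
}

fn main() {
    // Deterministic pseudo-random input (no external crates needed).
    let input: Vec<u64> = (0..200_000u64)
        .map(|i| i.wrapping_mul(2654435761) % 1_000_000)
        .collect();

    let mut qs_input = input.clone();
    let t = Instant::now();
    quicksort(&mut qs_input);
    println!("quicksort: {:?}", t.elapsed());

    let t = Instant::now();
    let ms_output = mergesort(&input);
    println!("mergesort: {:?}", t.elapsed());

    assert_eq!(qs_input, ms_output); // both must agree on the result
}
```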

There are many more examples where practical application does not align with the use of O-notation. Also, detailed analysis of algorithmic performance usually takes too long to be useful in practice. So what other options do we have?

There is a better way: an emerging science called “Analytic Combinatorics”, pioneered by Robert Sedgewick and the late Philippe Flajolet over the past 30 years, with the first (and only) text appearing in 2009, called *Analytic Combinatorics*. This approach is based on the scientific method and provides an accurate and more efficient way to determine the performance of algorithms (and to classify them correctly). It even makes it possible to reason about an algorithm’s performance based on real-world input. It also allows for the generation of random data for a particular structure or structures, among other benefits.

For an introduction by the same authors there is *An Introduction to the Analysis of Algorithms* (or the free PDF version) and Sedgewick’s video course. Just to make it clear how important this new approach is going to be to computer science (and other sciences), here’s what another CS pioneer has to say:

*“[Sedgewick and Flajolet] are not only worldwide leaders of the field, they also are masters of exposition. I am sure that every serious computer scientist will find this book rewarding in many ways.”* — From the Foreword by Donald E. Knuth

# Purely Functional Data Structures & Algorithms : Selection Sort

***Updated @ 2012-08-31 02:08:58 due to internet pedantry***

According to Wikipedia :

In computer science, selection sort is a sorting algorithm, specifically an in-place comparison sort. It has O(n²) time complexity, making it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity, and it also has performance advantages over more complicated algorithms in certain situations, particularly where auxiliary memory is limited.

(A functional implementation of selection sort is however, **not** an in-place sort.)

Behold the abomination which is the imperative implementation (from the Wikipedia link) :

```c
int i, j;
int iMin;

for (j = 0; j < n - 1; j++) {
    iMin = j;
    for (i = j + 1; i < n; i++) {
        if (a[i] < a[iMin]) {
            iMin = i;
        }
    }
    if (iMin != j) {
        swap(a[j], a[iMin]);
    }
}
```

Now, the functional equivalent in Haskell :

```haskell
import Data.List (delete)

selectionSort :: [Int] -> [Int] -> [Int]
selectionSort sorted [] = reverse sorted
selectionSort sorted unsorted = selectionSort (min:sorted) (delete min unsorted)
  where min = minimum unsorted
```

Or in Shen :

```
(define selection-sort-aux
  { (list number) --> (list number) --> (list number) }
  Sorted [] -> (reverse Sorted)
  Sorted Unsorted -> (let Min (minimum Unsorted)
                       (selection-sort-aux (cons Min Sorted)
                                           (remove-first Min Unsorted))))
```
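For comparison, here is a hedged sketch of the same shape in Rust (my own addition, not part of the original post). Like the Haskell and Shen versions, it builds a fresh result rather than sorting in place, though it uses a `Vec` instead of a linked list:

```rust
// Functional-style selection sort: repeatedly remove the minimum of the
// unsorted remainder and push it onto the sorted result.
fn selection_sort(mut unsorted: Vec<u64>) -> Vec<u64> {
    let mut sorted = Vec::with_capacity(unsorted.len());
    while !unsorted.is_empty() {
        // `minimum` + `delete` from the Haskell snippet, as one index scan.
        let min_idx = unsorted
            .iter()
            .enumerate()
            .min_by_key(|&(_, &x)| x)
            .map(|(i, _)| i)
            .unwrap();
        sorted.push(unsorted.remove(min_idx));
    }
    sorted
}

fn main() {
    println!("{:?}", selection_sort(vec![5, 1, 4, 2, 3])); // [1, 2, 3, 4, 5]
}
```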

Yes, these functional snippets use their respective implementations of the list type (which is not an efficient persistent data type for random access or update in either Haskell or Shen). Replacing the list with Data.Sequence (a persistent structure with efficient access and update) in the Haskell snippet is trivial; I’ll leave that as an exercise for the reader. Shen is too new to ship such efficient persistent types at the moment, but implementations will appear in the future and changing the snippet would also be trivial. A Clojure implementation using its built-in efficient persistent types would also be trivial.

The complete code can be found here.