Everyday Ada: Simple REST Service


Ada (previously) is a time-tested, safe and secure programming language with a 40-year record of success in mission-critical applications such as…

  • Air Traffic Management Systems
  • Desktop and Web Applications
  • Commercial Aviation
  • Banking and Financial Systems
  • Railway Transportation
  • Information Systems
  • Commercial Rockets
  • Commercial Shipboard Control Systems
  • Commercial Imaging Space Vehicles
  • Television/Entertainment Industry
  • Communication and Navigational Satellites and Receivers
  • Medical Industry
  • Data Communications
  • General Industry
  • Scientific Space Vehicles
  • Military Applications

How good is Ada, though, for something most programmers might work on in their day-to-day, like building a REST service?
Well, here’s the “hello world” of a REST service in Ada.

Ada “Hello World!” REST service.

If you’ve never seen Ada code before but you have built a web service, it’s fairly easy to make sense of this code. Now let’s look at the performance of this code on a mid-range desktop with an AMD Ryzen Threadripper 2950X.

Benchmark results for Ada “Hello World!” REST service.

Having built a few global-scale distributed systems in various languages at Amazon and AWS, I can say that getting this kind of performance and stability (9,000 tps with 300 concurrent clients at a 99.9% success rate) on a single machine this easily, without having to sacrifice safety or security, is rare, even with such a simple example.
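
Since the code itself is in a screenshot, here’s a rough sense of what a bare-bones “hello world” HTTP endpoint involves, sketched in Rust using only the standard library (a minimal illustrative sketch, not the Ada code above; the port and reply are arbitrary):

use std::io::{Read, Write};
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        // Read and discard the request; a real service would route on it.
        let mut buf = [0u8; 1024];
        let _ = stream.read(&mut buf)?;
        let body = "Hello World!";
        let response = format!(
            "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: {}\r\n\r\n{}",
            body.len(),
            body
        );
        stream.write_all(response.as_bytes())?;
    }
    Ok(())
}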

https://www.adacore.com/about-ada

The name “Ada” is not an acronym; it was chosen in honor of Augusta Ada Lovelace (1815-1852), a mathematician who is sometimes regarded as the world’s first programmer because of her work with Charles Babbage. She was also the daughter of the poet Lord Byron.

Raspberry Pi 4

The Raspberry Pi 4 is a leap forward not just for the Pi but for single-board computers across the board. It’s a great lightweight desktop replacement, and it has even surprised me as a viable Rust/Ada development environment.

Raspberry Pi 4 running MATE desktop

My setup includes a FLIRC Raspberry Pi 4 case, which is basically a giant aluminum heat sink. This allows for a completely silent setup running at an average of 54°C. During extended code compiles it tops out around 65°C, which is more than enough cooling and completely worth the silence when compared with a fan.

The FLIRC Raspberry Pi 4 case is the best viable silent cooling option.

The Raspberry Pi 4 firmware does not yet support booting from USB; that’s coming in the future. For now, the best way to get much better system performance is to use the SD card only for booting and then run the system from a high-quality USB 3.1 stick. Expect about 10x better performance from doing this than from running everything on the SD card.
It’s easy; take a look at this post for how to do it.

Raspberry Pi 4 setup showing microSD card for boot and USB 3.1 stick for the system.

Unfortunately Raspbian still only provides an armv7l 32-bit release. Ubuntu MATE does support armv7l (ARMv7 32-bit) and arm64 (ARMv8 64-bit), but there’s no support for the RPi 4 yet. It is possible to get a 64-bit OS on the RPi 4 by copying over the Raspbian firmware using Ubuntu Server for ARM, but having done it I wouldn’t say it’s worth the effort. Better to just wait for official support if you need to run a 64-bit system.

Ada, Rust and Steelman requirements

Ada and Rust are the only two pragmatic languages still growing in a healthy way that also meet the Steelman language requirements (created by the US DoD circa 1978).

Crucial among the Steelman requirements were:

  • A general, flexible design that adapts to satisfy the needs of embedded computer applications.
  • Reliability. The language should aid the design and development of reliable programs.
  • Ease of maintainability. Code should be readable and programming decisions explicit.
  • Easy to produce efficient code with. Inefficient constructs should be easily identifiable.
  • No unnecessary complexity. Semantic structure should be consistent and minimize the number of concepts.
  • Easy to implement the language specification. All features should be easy to understand.
  • Machine independence. The language shall not be bound to any hardware or OS details.
  • Complete definition. All parts of the language shall be fully and unambiguously defined.

Signal Desktop for Arm/Linux


As I’ve been spending a lot of time lately with Arm hardware as my primary desktop and server platform, I missed using my secure messenger app of choice. So I cooked up builds for both armv7l/armhf/GNU Linux (32-bit) and arm64/aarch64/GNU Linux (64-bit).

These are unofficial but fully working builds of Signal Desktop for Linux on Arm processors. Enjoy!

Signal Desktop armv7l/armhf/GNU Linux

Signal Desktop arm64/aarch64/GNU Linux

Signal Desktop running on an armv7l desktop.

Two sorts with Rust

Here are some initial thoughts on Rust, almost two years since I last looked at it, along with implementations of merge sort and quick sort. (These are just my opinions, so please don’t panic!)

1. Cargo is awesome for managing package dependencies and building projects.
2. Rust is a very nice systems programming language that supports a functional style of programming. It’s much easier to work with than C/C++, with very nearly the same performance in many cases.
3. A strong, inferred static type system!
4. The authors have not neglected the rich and varied history of programming language theory while being very pragmatic about the design and usefulness of the language.

Let’s look at implementing some popular sorting algorithms. First … quick sort.
I won’t be pedantic here by explaining the quick sort algorithm, but an elegant and performant implementation is ~20 LOC. Not bad for a systems-level programming language.

// Element type assumed to be i32; the type parameters were lost in the
// blog's formatting.
pub fn quicksort_rec(nums: Vec<i32>) -> Vec<i32> {
    match nums.len() {
        cnt if cnt <= 1 => nums,
        cnt => {
            let mut left = Vec::new();
            let mut right = Vec::new();
            // Use the first element as the pivot; partition the rest.
            let pivot = nums[0];
            for i in 1..cnt {
                match nums[i] {
                    num if num < pivot => left.push(num),
                    num => right.push(num),
                }
            }
            // Sort each partition recursively, then stitch them together.
            let mut left_sorted = quicksort_rec(left);
            let mut right_sorted = quicksort_rec(right);
            left_sorted.push(pivot);
            left_sorted.append(&mut right_sorted);
            left_sorted
        }
    }
}
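
Note that this version allocates fresh vectors for the partitions rather than sorting in place, trading some memory and speed for clarity and straightforward ownership.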

An implementation of merge sort is a bit longer at ~35 LOC.

fn merge(mut left: Vec<i32>, mut right: Vec<i32>) -> Vec<i32> {
    let mut merged = Vec::new();
    // Pop the larger of the two tails so `merged` is built in descending
    // order ...
    while !left.is_empty() && !right.is_empty() {
        if left.last() >= right.last() {
            merged.push(left.pop().unwrap());
        } else {
            merged.push(right.pop().unwrap());
        }
    }
    while !left.is_empty() {
        merged.push(left.pop().unwrap());
    }
    while !right.is_empty() {
        merged.push(right.pop().unwrap());
    }
    // ... then reverse once at the end to get ascending order.
    merged.reverse();
    merged
}

pub fn mergesort_rec(nums: Vec<i32>) -> Vec<i32> {
    match nums.len() {
        cnt if cnt <= 1 => nums,
        cnt => {
            // Split into two halves, sort each recursively, then merge.
            let mut left = Vec::new();
            let mut right = Vec::new();
            let middle = cnt / 2;
            for i in (0..middle).rev() { left.push(nums[i]); }
            for i in (middle..cnt).rev() { right.push(nums[i]); }
            let left = mergesort_rec(left);
            let right = mergesort_rec(right);
            merge(left, right)
        }
    }
}
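
A quick usage sketch (a hypothetical main, just to show the calling convention; both functions take and return owned vectors):

fn main() {
    let nums = vec![5, 3, 8, 1, 9, 2];
    println!("{:?}", quicksort_rec(nums.clone())); // [1, 2, 3, 5, 8, 9]
    println!("{:?}", mergesort_rec(nums));         // [1, 2, 3, 5, 8, 9]
}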

Lastly, here are the timings from my very underpowered laptop …

Timings for quick sort and merge sort.

The code for the project can be found here.

Ring probabilities in F#


A few months back I took a look at Elixir. More recently I’ve been exploring F#, and I’m very pleased with the experience so far. Here is the ring probabilities algorithm implemented in F#. It’s unlikely that I’ll ever use Elixir again; having the powerful static type system F# provides at my disposal is just too good.

let rec calcStateProbs (prob: float, i: int,
                        currProbs: float [], newProbs: float []) = 
  if i < 0 then
    newProbs
  else
    let maxIndex = currProbs.Length-1
    // Match prev, next probs based on the fact that this is a
    // ring structure.
    let (prevProb, nextProb) =
      match i with
        | i when i = maxIndex -> (currProbs.[i-1], currProbs.[0])
        | 0 -> (currProbs.[maxIndex], currProbs.[i+1])
        | _ -> (currProbs.[i-1], currProbs.[i+1])
    let newProb = prob * prevProb + (1.0 - prob) * nextProb
    Array.set newProbs i newProb
    calcStateProbs(prob, i-1, currProbs, newProbs)



let calcRingProbs parsedArgs =
  // Probs at S = 0.
  //   Make certain that we are positioned at only start location.
  //     e.g. P(Start Node) = 1
  let startProbs =
    Array.concat [ [| 1.0 |] ; [| for _ in 1 .. parsedArgs.nodes - 1 -> 0.0 |] ] 
  let endProbs =
    List.fold (fun probs _ ->
               calcStateProbs(parsedArgs.probability, probs.Length-1,
                              probs, Array.create probs.Length 0.0))
              startProbs [1..parsedArgs.states]
  endProbs
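
(parsedArgs here is a record with probability, nodes and states fields, produced by argument parsing not shown in this snippet; see the full source linked below.)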

Here’s the code.
No promises this time, but I may follow this sequential version up with a parallelized version.

Ring probabilities with Elixir

I’ve been hearing more about Elixir lately, so I thought I’d take it for a spin.

Elixir is a functional, meta-programming aware language built on top of the Erlang VM. It is a dynamic language that focuses on tooling to leverage Erlang’s abilities to build concurrent, distributed and fault-tolerant applications with hot code upgrades.

I’ve never really spent any time with Erlang, but I’ve always been curious about it, and about the fact that it’s one of the best-kept ‘secrets’ in many startups these days. I’ve heard for years how easy it is to ‘scale out’ compared with many other languages and platforms.

Joe Armstrong, the creator of Erlang, wrote a post about Elixir in which he seemed to really like it, except for some minor things. This got me even more curious, so I decided to write some code that seemed like it could benefit from the features Elixir provides for easily making suitable algorithms parallelizable.

Let’s talk about ring probabilities. Say we have a cluster of N nodes in a ring topology, and some code that requires S steps to be run, where each subsequent step runs on the node to the right of the previous node with some probability P (and on the node to the left with probability 1-P).
In the initial state (S=0) the probability of the code running on the starting node A is P=1.
At the next step (S=1) the probability of the step running on the node to the right in the ring is P, and the probability of it running on the node to the left is 1-P. (The resulting update rule is sketched in code right after the diagrams below.)
Here is an example with some crude ASCII diagrams to represent this visually:

Initial node probability for a 5-node ring at S=0 is P=1 for the starting node.

N = 5 (nodes)
For S = 0 (initial state)

1 - P = 0.5                  P = 0.5
Counter-clockwise           Clockwise
              +-----+
              | P = |
   +----------+ 1.0 +----------+
   |          +-----+          |
+--+--+                     +--+--+
| 0.0 |                     | 0.0 |
+--+--+                     +--+--+
   |                           |
   |                           |
   |  +-----+         +-----+  |
   +--+ 0.0 +---------+ 0.0 +--+
      +-----+         +-----+

Node probabilities for the same 5-node ring after 2 state transitions.

N = 5 (nodes)
S = 2

1 - P = 0.5                  P = 0.5
Counter-clockwise           Clockwise
              +-----+
              | P = |
   +----------+ 0.5 +-------------+
   |          +-----+             |
+--+--+                        +--+--+
| 0.0 |                        | 0.0 |
+--+--+                        +--+--+
   |                              |
   |                              |
   |  +------+         +-------+  |
   +--+ 0.25 +---------+ 0.25  +--+
      +------+         +-------+
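
Before getting to the Elixir, here’s the single state transition pinned down as a minimal Rust sketch (a hypothetical helper written for this explanation, not code from this post’s project):

// One state transition for an n-node ring: the probability of being at
// node i becomes p * old[i-1] + (1 - p) * old[i+1], with indexes wrapping
// around the ring.
fn step(p: f64, old: &[f64]) -> Vec<f64> {
    let n = old.len();
    (0..n)
        .map(|i| p * old[(i + n - 1) % n] + (1.0 - p) * old[(i + 1) % n])
        .collect()
}

fn main() {
    // The 5-node example above: all probability starts on node 0.
    let mut probs = vec![1.0, 0.0, 0.0, 0.0, 0.0];
    for _ in 0..2 {
        probs = step(0.5, &probs);
    }
    println!("{:?}", probs); // [0.5, 0.0, 0.25, 0.25, 0.0]
}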

Let’s first write the sequential version of the algorithm to calculate the ring probabilities; the parallel version will be handled in the next post. Data types in Elixir are pretty basic at this point, with Elixir still not having reached 1.0. I decided to use an array to represent the ring in anticipation of later parallelizing the algorithm. A list seemed unsuitable because access times are linear and a parallel map across the structure would most likely be required.

Interestingly, for a sequential version a list is actually the fastest data structure to use in combination with recursion and pattern matching, but I’ll get into that in the next post. For now, let’s get back to implementing a sequential version with an array and the map function …

Elixir doesn’t have an array or vector type (yet?). I’m not going to comment on this. Instead we will use Erlang’s array type. Dipping into Erlang libraries from Elixir is pretty trivial, so it’s no big deal, other than that Elixir’s parameter conventions for function calls are the reverse of Erlang’s, which can be a little annoying. (Erlang’s :array.set/3, for example, takes the array as its last argument, whereas idiomatic Elixir functions take the subject first.)

Let’s look at the function for calculating the node probabilities given a direction-change probability, a number of nodes and a state count:

  def calc_ring_probs(p, n, s)
      when is_float(p) and p >= 0 and p <= 1 and
           is_integer(n) and n > 0 and
           is_integer(s) and s >= 0 do

    # Probs at S = 0.
    #   Certain that we are positioned at only the start location.
    #     e.g. P(Start Node) = 1
    initial_probs = :array.new([size: n, fixed: true, default: 0.0])
    initial_probs = :array.set(0, 1.0, initial_probs)
    IO.puts "Calculating ring node probabilities where P=#{p} N=#{n} S=#{s} ...\n"

    # If we are moving beyond the initial state then step through all the
    # states, otherwise keep the initial probabilities.
    final_probs =
      if s > 0 do
        Enum.reduce(1..s, initial_probs,
                    fn _, new_probs -> calc_state_probs(p, new_probs) end)
      else
        initial_probs
      end

    final_probs
  end

The first thing you might notice at the beginning of calc_ring_probs is the guard clause (when …) after the function parameter definition. This is a nice way of ensuring some pre-conditions are met for the function to return meaningful results.
We check that the probability parameter is a float within the range 0.0 to 1.0, that the node count is an integer greater than zero, and that the state count is an integer of zero or more. Call the function with, say, n = 0 and you get a FunctionClauseError instead of a silently wrong answer.
Next the initial probabilities are created using an Erlang array. If the required state is not the initial state (S=0), we reduce over the number of states, calculating the probabilities of the ring at each state (calc_state_probs) until we reach the final state.

Now let’s look at the implementation of calc_state_probs.

  def calc_state_probs(p, prev_probs)
      when is_float(p) and p >= 0 and p <= 1 do
    sz = :array.size(prev_probs)
    :array.map(fn(i, _) ->
                 # Previous and next indexes wrap around the ring.
                 prev_i = if i == 0, do: sz - 1, else: i - 1
                 prev_prob = :array.get(prev_i, prev_probs)
                 next_i = if i == sz - 1, do: 0, else: i + 1
                 next_prob = :array.get(next_i, prev_probs)
                 p * prev_prob + (1 - p) * next_prob
               end, prev_probs)
  end

The function takes the probability P and an array of the previous probabilities. For each index we determine the previous and next node indexes, wrapping around the ring: if the current index is the first, the previous index is the last, and if the current index is the last, the next index is the first. The new probability for the current index is then the previous node’s probability weighted by P plus the next node’s probability weighted by 1-P.

That’s really all there is to the sequential version.

On a MacBook Air, a 1,000,000-node ring over 10 state changes takes ~7.4 seconds.

bash-3.2$ mix run lib/ring_probs.ex 0.5 1000000 10
Calculating ring node probabilities where P=0.5 N=1000000 S=10 ...

{:array, 50, 0, 0.0,
 {{0.24609375, 0.0, 0.205078125, 0.0, 0.1171875, 0.0, 0.0439453125, 0.0,
   0.009765625, 0.0},
  {9.765625e-4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0},
  {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0},
  {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0},
  {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0}, 10, 10, 10, 10, 10, 10}}
... 999950 node probabilities ...
{:array, 999950, 0, 0.0,
 {{{{{{0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0},
      {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0},
      {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0},
      {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0},
      {9.765625e-4, 0.0, 0.009765625, 0.0, 0.0439453125, 0.0, 0.1171875, 0.0,
       0.205078125, 0.0}, 10, 10, 10, 10, 10, 10}, 100, 100, 100, 100, 100, 100,
     100, 100, 100, 100}, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000,
    1000}, 10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000,
   10000}, 100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000,
  100000, 100000}}

calc time: 7370.12 msecs

The complete code can be found here.

My reluctance with Elixir is that it’s a strongly but dynamically typed language, which is much the same issue I’ve had with Erlang. There are ways to work around this; one is using a static analysis tool. Read this for more info. Apparently success typings are a way to correctly infer types in Erlang. I can’t say that I’m convinced: my own experience has shown that any production system that needs to scale requires at least something like type hinting. I might be wrong, and in fact I hope I am, because I like what I’ve seen of Elixir and heard about the Erlang VM for building distributed systems.

In the next post we’ll rewrite the code to make it run concurrently, and also look at how a sequential version of the algorithm using recursion, pattern matching and lists is an order of magnitude faster than the sequential version using arrays in this post.
The sequential recursive version may even be faster than a concurrent version, depending on how many cores your machine has ;-)

Clojure’s growing popular mind share

Popular interest in Clojure has increased rapidly since 2008, almost to the level of Java (the language) today, which has dropped off significantly. (At least according to Google web trends.)
In contrast, popular interest in Common Lisp seems to have dropped off steadily since 2004.

Clojure vs. Java vs. Common Lisp

I used “java language” instead of “java” because the latter is ambiguous enough to mean the language, the framework, the JVM, the island or the coffee.

My earlier outlook on Clojure’s prospects circa 2009.