This is my Blog.

It's full of opinions on a wide range of stuff.

Continuous Delivery for Scala with TravisCI

For many engineers - regardless of language - all the plumbing needed for setting up reliable continuous delivery (continuous release) is either tedious, or out of reach because the cost vs benefit equation just doesn't make sense. For Scala users this frustration is often compounded because SBT is, for many users, black magic they simply don't understand. Build setup and commands are often cargo-culted from one build to the next "because that's what worked last time". This is an unacceptable state of affairs.

A better way

If for no other reason than to reduce the burden on my team when it comes to open sourcing our software infrastructure, I have consolidated our open source builds into a straightforward and simple-to-use plugin that does all the things most users would want for releasing open source projects. This plugin is sbt-rig. Rig has the following goals:

  1. All development should be done in the open and use Git SCM, and builds should take place on Travis (or internally hosted enterprise Travis instances). The use of Travis is important, because it mandates all build definitions are checked in, and that credentials are properly encrypted.
  2. All pull requests should be built and tested, complete with code coverage, to inform the reviewer in the best possible manner as to whether this PR should merge or not.
  3. Merging code to the master branch actually means something: any updates to master are considered release-worthy and code is automatically built and released as such.
  4. All releases should be tagged in Git and their artifacts should be hosted on maven central.
  5. sbt-rig should not interfere with anything outside of these bounds in a mandatory way. Users are free to configure their projects however they wish, and still gain this workflow. Likewise, sbt-rig should not interfere with any additional release or deployment workflow steps.
  6. Users should not be manually setting the patch version number. Provided the version number is monotonically increasing, its actual value doesn't matter. Users can focus on major and minor semantics; patch numbers themselves don't matter, as the build should be both binary and semantically compatible (a la semver).

With these goals defined, let's move on to how one actually uses sbt-rig.


The setup of sbt-rig in your project is trivial, but there are a few steps of administration that you'll have to go through first in order to publish to maven central:

  1. Register your profile with maven central. This allows you to publish artifacts under a given top-level group - for example, my profile allows me to write artifacts for com.timperrett and all subordinate domains.

  2. You'll need to set up a GPG key and public ring. You'll have to publish the public part of your key pair as well.

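If you haven't done this before, the shape of it is roughly as follows (the keyserver shown is just one common choice, and YOUR_KEY_ID is a placeholder for the id printed by --list-keys):

$ gpg --gen-key
$ gpg --list-keys
$ gpg --keyserver hkp://pool.sks-keyservers.net --send-keys YOUR_KEY_ID
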
Thankfully these operations are a one-time thing, so it's only tedious once :-D Be 100% sure to retain your credentials for maven central in a super safe place - you'll need them for subsequent steps, shortly.

Project Setup

First and foremost, ensure that your project/build.properties file has the latest version of SBT configured:

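A minimal sketch of that file - the version number here is illustrative, so use whatever the latest stable SBT release is:

sbt.version=0.13.11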

Next, simply add the sbt-rig plugin by adding the following lines to your project/plugins.sbt:

addSbtPlugin("io.verizon.build" % "sbt-rig" % "1.1.20")

With that done, you need to decide what license you want to release your open source project under. If you're not sure, I'd recommend doing some background reading on the common open source licenses. Personally, I'd recommend Apache 2.0, but you should use a license that makes sense for your company, team and project - seek legal advice if you are not sure what makes the most sense. Once you've decided on a license, you need to set some metadata about your project such that the maven central constraints for POM files are satisfied. I typically do this as an AutoPlugin within a given project (sbt-rig does not ship this for you, because it does not want to make assumptions about the legal nature of your projects, or who contributes to them).

In project/CentralRequirementsPlugin.scala you can add the following (the URLs and names below are placeholders - substitute your own):

package myproject

import sbt._, Keys._
import xerial.sbt.Sonatype.autoImport.sonatypeProfileName

object CentralRequirementsPlugin extends AutoPlugin {

  override def trigger = allRequirements

  override def requires = RigPlugin

  override lazy val projectSettings = Seq(
    sonatypeProfileName := "com.yourcompany.myproject",
    // developer metadata required by the maven central POM constraints
    pomExtra in Global := {
      <developers>
        <developer>
          <id>yourid</id>
          <name>Your Name Goes Here</name>
        </developer>
        <!-- add other developers as needed -->
      </developers>
    },
    licenses := Seq("Apache-2.0" -> url("https://www.apache.org/licenses/LICENSE-2.0.html")),
    homepage := Some(url("https://github.com/yourorg/yourproject")),
    scmInfo := Some(ScmInfo(url("https://github.com/yourorg/yourproject"),
                            "git@github.com:yourorg/yourproject.git"))
  )
}

I know this might look a bit daunting - but I assure you it's not. All we've done here is let maven central know who was developing this project, what license we are going to be releasing under (I strongly urge you to also place a LICENSE file in the root of your repository with the complete legal text of the chosen license), and also where users can both clone the source code, and where they can read more about the project (for the latter, I recommend GitHub Pages, but you're free to host that wherever you want - future versions of sbt-rig will make using GitHub Pages stupidly simple).

Travis Setup

Configuring Travis is also straightforward and obvious, but if you're not familiar, then I recommend reading the getting started documentation from Travis themselves. Once familiar, and once you have enabled builds on your project, you are going to want a .travis.yml file that looks something like this:

language: scala

scala:
  - 2.11.7

jdk:
  - oraclejdk8

branches:
  only:
    - master

before_script:
  - "if [ $TRAVIS_PULL_REQUEST = 'false' ]; then git checkout -qf $TRAVIS_BRANCH; fi"

script:
  - |
    if [ $TRAVIS_PULL_REQUEST = 'false' ]; then
      sbt ++$TRAVIS_SCALA_VERSION 'release with-defaults'
    else
      sbt ++$TRAVIS_SCALA_VERSION test
    fi

after_success:
  - find $HOME/.sbt -name "*.lock" | xargs rm
  - find $HOME/.ivy2 -name "ivydata-*.properties" | xargs rm

cache:
  directories:
    - $HOME/.ivy2/cache
    - $HOME/.sbt/boot/scala-$TRAVIS_SCALA_VERSION

before_cache:
  - find $HOME/.sbt -name "*.lock" | xargs rm
  - find $HOME/.ivy2 -name "ivydata-*.properties" | xargs rm

env:
  global:
    # we'll go over these missing values in a second.
    - secure: "......"

For the most part, this is going to be super boilerplate for most projects and will barely change. There are, however, a few key points to note:

  1. This will enable Travis to build pull requests and submit feedback to GitHub.
  2. This build only targets a single version of Scala - we'll cover more advanced uses in the next section.
  3. The after_success and cache blocks will dramatically speed up your build. These instruct Travis to cache all the common JARs used by your project, stuff them in S3, and then download them before any subsequent builds, which avoids re-bootstrapping the Ivy environment (a.k.a. downloading the internet).
  4. The branches section is important, as it prevents Travis from going into a build loop and triggering builds for tags. If you do not do this, you will inadvertently find yourself DoS'ing the build server.

The main thing that is missing from this definition - and must be added once per repository - is the credentials for publishing to maven central. As-is, sbt-rig expects these to be set in the build environment, and to do this safely we can leverage Travis' support for variable encryption. You'll need to do this twice:

travis encrypt --add -r yourorg/yourrepo SONATYPE_USERNAME=xxxxxxx
travis encrypt --add -r yourorg/yourrepo SONATYPE_PASSWORD=zzzzzzz

These values are encrypted with a special RSA key (added by Travis to your repo) that is repository-specific, and the private key is held by Travis, so essentially these are one-way encrypted from the perspective of the user; the secure values can now be added to the env block of the build file (--add on the travis command is just a shortcut for this; feel free to do it manually if you want to preserve formatting).

That's all there is to it for a single Scala version build. The next section covers some extra-credit items: cross-building multiple versions of Scala, cross-building along arbitrary matrices, and adding code coverage reporting.

Advanced Builds

Given that Scala is not binary compatible between feature versions, a really common use case is to build for both 2.10 and 2.11. To do this, simply set multiple versions of Scala in your .travis.yml:

language: scala

scala:
  - 2.10.5
  - 2.11.7

sbt-rig will automatically do the right thing, and create staging repositories on maven central for each discrete build job (meaning one build does not stomp on the other).

Another common, but much trickier, use case is publishing multiple library versions that contain different dependencies and compile different sources, based on either a particular library version or language version (and combinations therein). For this, we can leverage the Travis build matrix feature. This is an extremely powerful tool, and one that is perfect for this job. Consider the following real example from the verizon/knobs project:

language: scala

matrix:
  include:
    - jdk: oraclejdk8
      scala: 2.10.5
      env: SCALAZ_STREAM_VERSION=0.7.3a
    - jdk: oraclejdk8
      scala: 2.10.5
      env: SCALAZ_STREAM_VERSION=0.8.1a
    - jdk: oraclejdk8
      scala: 2.11.8
      env: SCALAZ_STREAM_VERSION=0.8.1a
    - jdk: oraclejdk8
      scala: 2.11.8
      env: SCALAZ_STREAM_VERSION=0.7.3a

In this case, we need to build the library for both Scala 2.10 and Scala 2.11, and then both of those language versions with different versions of Scalaz Stream (such that they rely on different and incompatible versions of Scalaz). By using the matrix in this manner, we can get all the combinations we need, and if needed, even exclude particular combinations to create a sparse matrix. Naturally this does not work right out of the box, because SBT needs to know which library versions to include. To do this, we simply embed a small AutoPlugin in the project directory of our repository:

package myproject

import sbt._, Keys._

object CrossLibraryPlugin extends AutoPlugin {

  object autoImport {
    val scalazStreamVersion = settingKey[String]("scalaz-stream version")
  }

  import autoImport._

  override def trigger = allRequirements

  override def requires = RigPlugin

  override lazy val projectSettings = Seq(
    scalazStreamVersion := {
      sys.env.get("SCALAZ_STREAM_VERSION").getOrElse("0.7.3a") // either "0.7.3a" or "0.8.1a"
    },
    // pull in version-specific sources, e.g. src/main/scalaz-stream-0.8
    unmanagedSourceDirectories in Compile +=
      (sourceDirectory in Compile).value / s"scalaz-stream-${scalazStreamVersion.value.take(3)}",
    version := {
      val suffix = if (scalazStreamVersion.value.startsWith("0.7")) "" else "a"
      val versionValue = version.value
      if (versionValue.endsWith("-SNAPSHOT"))
        versionValue.replaceAll("-SNAPSHOT", s"$suffix-SNAPSHOT")
      else s"$versionValue$suffix"
    }
  )
}
As is community convention within the Scalaz ecosystem, we then end up with artifacts that have an "a" appended to the version. Naturally, you can make this work with any set of matrix vectors you want, simply by adding a new AutoPlugin that implements the particular behavior you need - hopefully this gives you a sufficient template to customize for your needs. AutoPlugins plus build matrices are awesome and can do pretty much anything.

Enjoy, fellow Scala users.

Footnote about Travis stability

After initially posting this article, I received questions about the instability of Travis and how to handle flaky builds when it sporadically fails. That is a legitimate question, and one that doesn't have an awesome answer. For our company open source projects, we do all the development, conversation and testing in the open, and then we have a process that dynamically syncs the public repositories into our GitHub Enterprise, at which point our internal Travis Enterprise conducts the release build. Clearly this is a heavy hammer, but given our internal systems are the same as the external ones, it simply allows the release process to be deterministic, as our dedicated hardware is a lot more stable than the public travis-ci (and stability is critical for automated release builds when doing this kind of release strategy). To mitigate flakiness where you don't have your own build infrastructure, you can either pay TravisCI for an improved hosted service, or you can split up your build such that you're not consuming 4GB of RAM during testing or similar... most of the time builds are only killed because they ran out of resources on the shared (containerized) build infrastructure. Hope that helps!


Optimizing SBT resolution times

SBT offers a powerful system for customizing and controlling your build. However, it has one major niggle: its dependency manager is based on Apache Ivy. This can be either a blessing or a curse, depending on which day of the week it happens to be. In many Scala project builds - regardless of whether they run on hosted CI, on-premise Jenkins or otherwise - a large portion of the overall build time is spent doing dependency resolution. This is further aggravated when builds use Ivy's dynamic revision feature, as updates need to be calculated for each build, regardless of the Ivy local cache. In addition to these issues, many projects make use of lots of dependencies, and in today's world of several competing artifact delivery systems (Sonatype, Maven Central, Bintray etc), resolving every artifact at every remote in an exhaustive fashion can consume a large quantity of time. With all this in mind, I've arrived at the following recipe for speeding up your builds.

I'm going to assume that you're running your build system on-premise, and that you have control over the build environment (regardless of whether it's Jenkins, Travis Enterprise, Bamboo or whatever).

  • First and foremost, you need to run a local Nexus installation on the same network as your build system (Artifactory will also work, but Nexus is free, so we'll assume that for the purpose of this article). Running it on the same network as the build system is key, as we're trying to reduce latency for artifact and metadata XML transfer.

  • Once your Nexus install is set up, you're going to need to do a little digging through your builds - as many as it takes to get most of your repository locations - and then you're going to add each one of these remote locations as a proxy repository in Nexus. In short, this means the Nexus install you created is going to act as a local cache for any artifacts that your builds need to download, meaning the build process (i.e. Ivy) will only have to look at your local Nexus, and it won't have to fetch anything over the internet.

  • What I've mentioned so far is pretty common practice - particularly for any larger system installation. There is one hitch though: any build that defines an external resolver will still cause Ivy to check it for artifacts in an exhaustive fashion, so be sure to update every SBT build definition to only point to the domain of your local Nexus installation. After doing that, your builds will have increased in speed by quite a fair bit. Great! However, your builds are still free to add external resolvers anytime they like, which gets you right back to square one, and relying on the whole engineering team to not add new resolvers can be extremely brittle. In order to fix this, you'll need to add a custom repositories file to the SBT configuration - located at $HOME/.sbt/repositories by default. The contents of the file should be:

  [repositories]
  local
  nexus-ivy-proxy-releases: http://YOURDOMAIN/nexus/content/groups/internal-ivy/, [organization]/[module]/(scala_[scalaVersion]/)(sbt_[sbtVersion]/)[revision]/[type]s/[artifact](-[classifier]).[ext]
  nexus-maven-proxy-releases: http://YOURDOMAIN/nexus/content/groups/internal/
  • The previous point is pretty well documented. What is not well documented is that one must also tell SBT to restrict resolver locations to those defined in this file (which it will not do by default, as it will still consult the default resolvers like Maven Central). Assuming you're using the sbt-extras script, there are three ways to ensure that the repositories file is authoritative (and restrictive) about where the build fetches artifacts from:
  1. Define a .sbtopts file in the root of your project, and add -Dsbt.override.build.repos=true as the contents (see the snippet below).
  2. Define an environment variable $SBT_OPTS that contains -Dsbt.override.build.repos=true.
  3. Finally, you can propagate it via the -sbt-opts command-line option if that works for you. However, I typically prefer the aforementioned options for a build environment, as then users don't have to care in their build scripts.
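
  For reference, the entire .sbtopts file can be just that single flag:

  -Dsbt.override.build.repos=true
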
  • With these fixes in place, your build times should drop off a cliff. I've seen 5-10x decreases in build times since implementing this.

Hope this helps someone and reduces your build times in the same way it reduced mine.


Frameworks are fundamentally broken

I've been thinking about writing this post for a while - several years in fact - but it's reached a point where I have to get this out of my head and onto the screen: frameworks are the worst thing to happen to software development in the last decade (and possibly ever).

For the purpose of this article, I shall define a Software Framework as this: one or more pieces of software that are designed to work in tight unison, with the aim of smoothing / easing / hastening / or otherwise "improving" the development cycle of a given application in a particular domain. The software in question is typically bundled together, and binary modules of the project are typically not used outside the intended framework usage or the framework itself. Examples of software frameworks include AngularJS, Play!, Ruby on Rails etc. At this point in my software engineering career, I have used a wide range of software frameworks, have been involved in writing more than one, and I even wrote a book about Lift. With this frame, good reader, please appreciate that one does not come to such a decision - to criticise frameworks as a programming paradigm - lightly. The following sub-sections outline what I see as the primary issues that make frameworks fundamentally flawed.

Lack of Powerful Abstraction

Business domains are often inherently complex, and this impacts the engineering that needs to take place to solve a problem within that business domain in a very fundamental way. In this regard, frameworks tend to be intrinsically limiting because they were written by another human being without your exact, complex business requirements in mind - you are programming inside someone else's constraints and technical trade-offs. More often than not, those trade-offs are not documented explicitly or encoded formally, which means users encounter these limitations through trial-and-error usage in the field.

Many framework authors take the approach that they are solving a general problem in a given engineering sector (web development, messaging, etc), but typically they end up solving the problem(s) at hand in a monolithic way. Specifically, authors have an "outside in" approach to design, where they allow "plugin points" for users of the framework to write their own application logic... the canonical example here can be found in MVC-style web framework controllers. In all but the most trivial applications this is a totally broken approach, as one often observes users either writing all their domain logic directly in the controller (i.e. inside the framework constraints), or alternatively, parts of the domain logic or behaviour "leaking" into the controller. Whilst it could be argued that this is simply an education problem with users, I would disagree, and argue that it takes a high degree of discipline from users to do the right thing... The easy thing is most certainly not the right thing.

Instead of the root cause being an education issue, I would propose that a fundamental problem exists with the mindset of the frameworks themselves - which often encourage this kind of poor user behaviour - in short, frameworks do not compose. Frameworks make composition of system components difficult or impossible, and without composition of system components there can never be any truly powerful abstraction... which is absolutely required to build reasonable systems. To clarify, the lack of composability exists at both the micro and macro levels of a system; components should plug together like lego bricks, irrespective of which lego pack those bricks came from (here's hoping you follow that tenuous analogy, good reader). Users don't wish to extend some magic class and be beholden to some bullshit class hierarchy and all its effects; users wish to provide a function that is opaque to the caller, provided the types line up, naturally. When frameworks do not allow this, it's a fundamental issue with the design of these software tools.

An obvious supposition might be that these kinds of monolithic, uncomposable designs occur because framework authors are trying to optimise for certain cases - more often than not, a case high on the list is making the framework "easy" to get started with. An interesting side-effect of this is that authors usually assume that users won't know too much about what they are using, and that the code they write needs to be minimal. Whilst I'm all for writing less code, assuming that users won't know how to use framework APIs only applies when the system is not based on any formal or well-known abstractions. The ramification of this lack of formalisation is two-fold:

  1. Enhanced burden on the framework author(s) as the lack of formalisation requires them to "teach" the user how to do everything from scratch. In practice this means writing more documentation, more tests and examples and more time spent on the community mailing lists helping users - ad infinitum.

  2. Users have to invest their time fairly aggressively in a technology without truly understanding it. This typically means getting up to speed with all the framework-specific terminology (e.g. "bean factory", "interceptor", "router") and programming idioms. As an interesting side-note, I believe this aggressive investment without understanding is actually what gives rise to a lot of "fanboism" in technology communities at large: people get invested quickly and feel the need to evangelise to others simply because they invested so much time themselves, and subsequently need to ensure that the tool they selected gains critical mass and long-term viability / credibility... that is no doubt a subject for another article though.

Let's consider for a moment what would happen if a framework component were implemented in terms of a formally known concept... For example, if one knows that a given component is a Functor, then one can immediately know how to reason about the operations and semantics of that component, because they are encoded formally as a part of the functor laws. This immediately frees framework authors and users from the burdens listed in points one and two above. However, what if framework users don't know what a Functor is, and are not familiar with the relevant laws? Well, there is no denying that learning many of the formal constructs will require effort on the part of users, but critically, what they learn is fundamentally useful when it comes to reasoning about problems in any domain. This is wildly more beneficial than learning how to operate in one particular solution space inside one particular framework. They will have learnt something fundamental about the nature of solving problems - something that will serve them well for the rest of their careers. Let me qualify that with an example:

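As a concrete sketch of what "encoded formally" buys you (this is plain Scala rather than any particular library's encoding), the two functor laws can be written down once and then reasoned about for any instance:

trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

object FunctorLaws {
  // identity: mapping the identity function must change nothing
  def identityLaw[F[_], A](fa: F[A])(implicit F: Functor[F]): Boolean =
    F.map(fa)(identity) == fa

  // composition: mapping f then g must equal mapping (g compose f) once
  def compositionLaw[F[_], A, B, C](fa: F[A], f: A => B, g: B => C)(implicit F: Functor[F]): Boolean =
    F.map(F.map(fa)(f))(g) == F.map(fa)(g compose f)
}
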
Concepts such as Functor should not be scary. Many engineers in our industry suffer from a kind of dualism where theory and practice are somehow separate, and formal concepts like Functor, Monad and Applicative (to name a few) are often considered to "not work in practice", and users of such academic vernacular are accused of being ivory tower elitists. Another possible explanation might be that engineers (regardless of their training: formal or otherwise) are simply unaware of the amazing things that have been discovered in computer science to date, and proceed to poorly reinvent the wheel many times over. In fact, I would wager that nearly every hard problem the majority of engineers will encounter in the field has had its generalised case be the subject of at least one study or paper... the tools we need already exist; it's our job as good computer scientists to research our own field, edify ourselves on historical discoveries, and make best advantage of the work done by those who went before us.

Short-term Gain

All software is optimised for something; sometimes it's raw performance, sometimes it's type-checked APIs and sometimes it's other things entirely. Whatever your tools are optimised for, some trade-offs have been made to achieve said optimisation. Many, many frameworks include phrases like those listed below:

  • "Increased productivity"
  • "Get started quickly!"
  • "Includes everything you need!" for XYZ domain

These kinds of benefits usually indicate the software is optimised for short-term gain. Users are hooked on the initial experience building "TODO" applications. More often than not, these users then later become frustrated when they hit a wall where the framework cannot do exactly what the business needs, and they have to spend time wading through the framework internals to figure out a gnarly work-around to solve their particular problem.

The real irony here is that optimising for the initial experience is such a wildly huge failure: the majority of engineers will not spend their time writing new applications; rather, they will be maintaining existing applications and having to - in many cases - reason about code that was not written by them. On large software projects, there are usually a myriad of technologies being employed to deliver the overall product, and having each and every software tool have similar concepts with different names and annoying edge cases is frankly untenable. Once again, the lack of formalisation or composability causes havoc in many areas (lest we forget, taking the time to figure out workarounds is usually painful and time-expensive).

Community fragmentation

For the vast majority of frameworks, they usually have a particular coding or API style, or a set of conventions users need to know in order to produce something - disastrously, these conventions are often not enforced or encoded formally with types. Whilst these conventions are probably obvious for authors of the framework, it makes moving from one framework to another a total mind-fuck for users - essentially giving users (and ergo, companies) a long-term vendor lock-in. Whilst vendor lock-in is clearly undesirable, there is another more important aspect: frameworks create social islands in our programming communities. How many StackOverflow questions have you seen with a title along the lines of "what's the Rails way to do XYZ operation?", or "How does AngularJS do ABC internally?". Software is written by people, for people, and it must always be consumed in that frame. Fragmenting a given language community with various ways to achieve the same thing (with no formal laws) just creates groups with arbitrary divisions that really make no sense; these dividing lines usually end up being taste in API design, or familiarity with a given practice.

Whilst the argument could be made that branching, competing and later merging of software projects is beneficial, when it comes to the people and the soft elements related to a technical project, the mental fallout from the fork/compete/merge cycle is extremely heavy, and usually the merge process never occurs (if it does, it usually takes years). Moreover, if a given framework community island fails, it's incredibly hard on the engineers involved. I have both experienced this personally, and witnessed it happening in multiple other communities - which is a worrying trend (again, lots of material for a later post to lament about that).

Looking forward

If the current trend in software development continues, this will certainly remain the case. It's imperative to understand that composability in our software tools is an absolute requirement, if we as an industry have any hope of not repeating ourselves time and time again.


Scalaz Task - the missing documentation

Before starting, let us consider the semantics of Future[A] for a moment: the monadic context being provided is that of asynchronous processing: the enclosed A may or may not exist yet, and if it doesn't exist, the computation to produce A is either not completed or it has failed. This is all very reasonable in isolation. Now then, the problem with this is that what one will often observe - especially with relative newcomers to functional programming - is that this "failed" state of Future gets abused to provide some semantic meaning within the given application code. Typically, if one were working with a computation that could fail at a higher application layer, one might model it as Throwable \/ A in a pure function, such that said computation could then be lifted into some asynchronous context simply by wrapping it in a future: Future[Throwable \/ A]. Whilst much cleaner, this usually provides a secondary complication in that one is then stacking monads all over the application code, and you end up needing monad transformers to declutter things and make it reasonable to work with. The downside here of course is that monad transformers (such as scalaz.EitherT) come with a layer of cognitive difficulty that many engineers find problematic to overcome.

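To make that stacking concrete, here's a rough sketch (the fetch function and its semantics are purely illustrative) of how scalaz.EitherT flattens Future[Throwable \/ A] so a single for-comprehension works across both layers:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scalaz.{EitherT, \/, \/-}
import scalaz.std.scalaFuture._

// an asynchronous computation that can also fail at the application layer
def fetch(id: Long): Future[Throwable \/ String] =
  Future(\/-(s"user-$id"))

// EitherT[Future, Throwable, A] lets us flatMap once, rather than
// pattern matching on \/ inside every Future.map
val result: EitherT[Future, Throwable, String] =
  for {
    a <- EitherT(fetch(1))
    b <- EitherT(fetch(2))
  } yield s"$a, $b"
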
In this frame, Scalaz brings scalaz.concurrent.Task to the table to try and make asynchronous operations that might have application-layer failures easier to work with, without the need for monad transformers, whilst still maintaining all the useful Scalaz typeclass utilities and combinators. In addition to being a convenience data type, it's important to understand that Task is a trampolined construct, which means it does not use up stack in the way a regular chain of function calls would (this is an important point, and becomes relevant later).

With the preamble out of the way, the next section describes the set of public combinators you will likely be using with Task, and gives examples of how to get started.

Task Combinators and Usage

There are a range of useful functions for task, some on the companion object and others on instances themselves.

Object functions

Typically speaking, when using Task one does not use the constructor to create new instances - whilst that is possible, at that point the user code needs to guarantee the exception safety of the supplied Future[Throwable \/ A], so it's typically just easier to use one of the combinators below:

  • Task.apply: if you have a computation that you want to evaluate asynchronously, use apply. This is much like scala.concurrent.Future.apply, in that it takes an implicit ExecutorService reference with which the computation will later be run. I say "later be run" because, unlike the std library Future, Task computations are not run at the moment they are created; they are lazy. Consider the following example:
scala> Task { println("foo") }
res0: scalaz.concurrent.Task[Unit] = scalaz.concurrent.Task@22147686

// dangerous - this is as bad as Option.get, included for completeness.
scala> res0.run
foo

scala> res0.attemptRun
foo
res2: scalaz.\/[Throwable,Unit] = \/-(())

In this example, note that Task.apply was used to suck the println operation into a task, which then just got allocated into res0. Nothing actually happens until calling one of run, attemptRun (which are both blocking) or runAsync (which, as the name suggests, is asynchronous). This is a really important detail, as your program won't even execute if you don't call one of these methods.

  • Task.now: if you have a strict value that you simply want to lift into a Task[A], then Task.now is your friend. Example:
scala> import scalaz.concurrent.Task
import scalaz.concurrent.Task

scala> val i: Task[Int] = Task.now(1)
i: scalaz.concurrent.Task[Int] = scalaz.concurrent.Task@6f721236
  • Task.fail: imagine you have a Throwable instance from some function that you want to lift into a Task; then you can use Task.fail. Failed tasks can also be chained together in a convenient fashion using the or combinator:
scala> import scalaz.concurrent.Task
import scalaz.concurrent.Task

scala> val i: Task[Int] = Task.fail(new Exception("example")) or Task.now(1)
i: scalaz.concurrent.Task[Int] = scalaz.concurrent.Task@4266d38d

scala> i.run
res0: Int = 1

scala> Task.fail(new Exception("boom")).run
java.lang.Exception: boom
    at .<init>(<console>:9)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  • Task.delay: this function might look similar to Task.apply in terms of its type signature, but its semantics are very different. Consider the implementation: suspend(now(a)). You can see that it first lifts the value into a task using Task.now, and then Task.suspend's the resulting task (see below for a description of suspend).

  • Task.suspend: suspend is an interesting beast, in that its purpose is to execute the supplied f: => Task[A] in a new trampolined loop. As mentioned in the introduction, Task is a trampolined structure, so this concept of suspending a computation is all to do with where the computation is trampolined to. Use cases for this are when you would like to create a Task in a lazy fashion, which can often be useful when you have different types of recursive functions - see the sketch below.
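
As a rough sketch of that recursive use case (the countdown function is purely illustrative), suspend defers the construction of each recursive step, so the chain is built lazily on the heap rather than eagerly on the call stack:

import scalaz.concurrent.Task

// each recursive call is wrapped in suspend, so building the Task
// does not itself recurse deeply on the JVM stack
def countdown(n: Int): Task[Int] =
  if (n <= 0) Task.now(0)
  else Task.suspend(countdown(n - 1))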

  • Task.fork: Given the supplied task, ensure that it runs on a separate logical thread when it comes to executing this task. How exactly this is executed at the end of the world depends on the supplied ExecutorService.

  • Task.async: sometimes when working with callback-based APIs you really wish you had a reasonable monadic API, and the Task.async function can help with this. Here's an example of wrapping Java's AsyncHttpClient using the async combinator; the callback-based original typically gives you code like this:

asyncHttp.prepareGet("", new AsyncCompletionHandler[Response] {
  def onComplete(r: Response) = ...
  def onError(e: Throwable) = ...
})

However, this can be transformed into a convenient monadic API using Task.async:

def get(s: String) = 
  Task.async(k => asyncHttp.prepareGet(s, toHandler(k)))

def toHandler[A](k: (Throwable \/ A) => Unit) = new AsyncCompletionHandler[A] {
  def onComplete(r: Response) = k(...)
  def onError(e: Throwable) = k(...)
}

// usage:
get("").flatMap(response => ...)
  • Task.gatherUnordered: more often than not, you may have a sequence of Task, or a situation where you would like tasks to be executed in parallel. In this case, gatherUnordered is exactly what you want. Here's an example:
scala> import scalaz._, Scalaz._
import scalaz._
import Scalaz._

scala> import scalaz.concurrent.Task
import scalaz.concurrent.Task

scala> val tasks = (1 |->  5).map(n => Task { Thread.sleep(100); n })
tasks: List[scalaz.concurrent.Task[Int]] = List(scalaz.concurrent.Task@22fef829, scalaz.concurrent.Task@4e707a3f, scalaz.concurrent.Task@3b3e8bb0, scalaz.concurrent.Task@4db4e786, scalaz.concurrent.Task@18de71a9)

scala> Task.gatherUnordered(tasks).run
res0: List[Int] = List(1, 3, 4, 2, 5)

Notice that when run, the tasks are all executed in parallel, and the resulting list of Ints is not always ordered. This is wildly useful in all manner of circumstances, and for folks still thinking about the Scala std library Future, you can think of this as being similar to Future.traverse or Future.sequence.

  • Task.reduceUnordered: In Scalaz 7.0 gatherUnordered was the only available way to run multiple tasks and collect results, and its output collection was concretely List[A]. As there are plenty of other cases where you would want to return some other type constructor rather than List, in these cases you can use reduceUnordered. This is best illustrated by the updated implementation of gatherUnordered:
reduceUnordered[A, List[A]](tasks, exceptionCancels)

Pretty simple: with the output M explicitly specified as List, this resolves a Reducer[List] from the companion object of Reducer, where Reducer is a convenience for composing operations on arbitrary monoids. See the implementation for more details, as it's a little out of scope for this blog post.

  • Task.unsafeStart: given that some users struggle with the delta between the runtime semantics of Task[A] and the Future[A] in the standard library, those users can choose to use unsafeStart. When used, this function will take a thread from the relevant ExecutorService and run the enclosed function immediately. This function is only available in Scalaz 7.1+, and should rarely, if ever, be used - and then only with extreme care.

These are the functions available for creating Task instances, which are accompanied by a range of convenient functions on the instances themselves.

Instance Functions

  • onFinish: upon completion of this task the supplied function Option[Throwable] => Task[Unit] will be executed. One could use this for cleaning up resources or executing some scheduled side-effect.
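
A quick sketch of that (the cleanup effect here is just a println for illustration):

import scalaz.concurrent.Task

val task = Task { 21 * 2 }

// the callback runs whether the task succeeded or failed; the
// Option[Throwable] tells you which case you are in
val withCleanup: Task[Int] =
  task.onFinish(_ => Task.delay(println("task finished")))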

  • timed: if you want your task to take no longer than a specified amount of time, then the timed function is for you. Here's an example:

scala> Task { Thread.sleep(1000); "one" }.timed(100)
res3: scalaz.concurrent.Task[String] = scalaz.concurrent.Task@5c85a402

scala> res3.run
java.util.concurrent.TimeoutException
    at scalaz.concurrent.Future$$anonfun$timed$1$$anon$
    at java.util.concurrent.Executors$
  • handleWith: if you want to explicitly handle or maybe even ignore certain types of application error, you can use the handleWith function to alter the contents of your task based on the received error:
scala> val exe = Task(10 / 0)
exe: scalaz.concurrent.Task[Int] = scalaz.concurrent.Task@5fb90bbb

scala> exe.handleWith {
     |   case e: ArithmeticException => Task.now(0)
     | }
res10: scalaz.concurrent.Task[Int] = scalaz.concurrent.Task@14b7c7d0

scala> res10.run
res11: Int = 0

A word on Nondeterminism[M[_]]

In Scalaz 7.1, some useful functions were also added to the Nondeterminism abstraction, specifically related to the parallel execution of tasks. Consider the following example:

scala> import scalaz.concurrent.Task, scalaz.Nondeterminism
import scalaz.concurrent.Task
import scalaz.Nondeterminism

scala> val sb = new StringBuffer
sb: StringBuffer =

scala> val t1 = Task.fork { Thread.sleep(1000); sb.append("a"); Task.now("a") }
t1: scalaz.concurrent.Task[String] = scalaz.concurrent.Task@539d15a4

scala> val t2 = Task.fork { Thread.sleep(800); sb.append("b"); Task.now("b") }
t2: scalaz.concurrent.Task[String] = scalaz.concurrent.Task@42e69075

scala> val t3 = Task.fork { Thread.sleep(200); sb.append("c"); Task.now("c") }
t3: scalaz.concurrent.Task[String] = scalaz.concurrent.Task@1e0467b1

scala> val t4 = Task.fork { Thread.sleep(400); sb.append("d"); Task.now("d") }
t4: scalaz.concurrent.Task[String] = scalaz.concurrent.Task@3f7da7fe

scala> val t5 = Task.fork { sb.append("e"); Task.now("e") }
t5: scalaz.concurrent.Task[String] = scalaz.concurrent.Task@1206a4db

scala> val t6 = Task.fork { Thread.sleep(600); sb.append("f"); Task.now("f") }
t6: scalaz.concurrent.Task[String] = scalaz.concurrent.Task@55470723

scala> val r = Nondeterminism[Task].nmap6(t1, t2, t3, t4, t5, t6)(List(_,_,_,_,_,_))
r: scalaz.concurrent.Task[List[String]] = scalaz.concurrent.Task@5ae9139

scala> r.run
res0: List[String] = List(a, b, c, d, e, f)

You might be wondering why it is even useful to use Nondeterminism over the plain Task combinators. Well, sometimes it's useful to abstract over a given input (much like you would do generally), and only care about the operations you might perform with a task, without being bound to Task explicitly.

Task Concurrency

Whilst the world of Task is generally quite lovely, there are some important implementation details to be aware of when getting a handle on what runs on which thread pool when actually executing something. Let's look at some examples to illustrate the various cases (the following examples use run just for simplicity, so you can try them in the REPL):

Task(e)(p).run
The above takes the task and runs the computation on thread pool p. This is probably the simplest case. If you do not supply p as an implicit or explicit argument then by default Task uses scalaz.concurrent.Strategy.DefaultExecutorService, which is a fixed thread pool which dynamically fixes its upper bound to the number of processors in the host machine.

Task.fork(Task(e)(p)).run

Which then expands to:

Task(Task(e)(p))(DefaultExecutorService).flatMap(x => x).run

In this case the evaluation of the expression Task(e)(p) will occur on DefaultExecutorService and the evaluation of the expression e will occur on p.

Task.fork(Task(example(arg)))(p).run

Which then expands to:

Task(Task(example(arg)))(p).flatMap(x => x).run
This is the other way around, where Task(example(arg)) will end up being evaluated on p, but example(arg) will be evaluated on DefaultExecutorService.

You might be wondering how on earth one can then get a computation to run on precisely the pool you want, without having to constantly pass around the explicit reference to the desired pool. For this you can use scalaz.Kleisli, and "inject" the reference to the executor. Consider the following:

scala> import java.util.concurrent.{ExecutorService,Executors}
import java.util.concurrent.{ExecutorService, Executors}

scala> import scalaz.concurrent.Task
import scalaz.concurrent.Task

scala> import scalaz.Kleisli
import scalaz.Kleisli

scala> type Delegated[+A] = Kleisli[Task, ExecutorService, A]
defined type alias Delegated

scala> def delegate: Delegated[ExecutorService] = Kleisli(e => Task.now(e))
delegate: Delegated[java.util.concurrent.ExecutorService]

scala> implicit def delegateTaskToPool[A](a: Task[A]): Delegated[A] = Kleisli(x => a)
delegateTaskToPool: [A](a: scalaz.concurrent.Task[A])Delegated[A]

scala> val exe = for {
     |   p <- delegate
     |   b <- Task("x")(p)
     |   c <- Task("y")(p)
     | } yield c
exe: scalaz.Kleisli[scalaz.concurrent.Task,java.util.concurrent.ExecutorService,String] = scalaz.KleisliFunctions$$anon$17@1774bf5c

scala> exe.run(Executors.newFixedThreadPool(2))
res0: scalaz.concurrent.Task[String] = scalaz.concurrent.Task@252cffe7

scala> res0.run
res1: String = y

This gives us a simple way to parameterise a set of functions to execute on the same thread pool if required.


Enabling rpmbuild on Mac OSX

When I'm working on a project for deployment, typically I want to create an RPM and see how well my packaging code did when it was putting all the things inside that RPM, and importantly that they are in the right places inside said RPM. I must stress, that I have no intention to actually deploy this RPM - I just want it to make development more convenient (all real deployable binaries are created by a CI system).

By default, Mac OSX is not able to build RPM files, which is a bit of a bummer. This can be quickly remedied by using Homebrew to install rpm and rpmbuild:

$ brew install rpm
$ brew install rpmbuild

By default, the rpm command is useful for doing certain things like introspecting RPM file contents etc:

// listing the contents of the rpm
$ rpm -qlp <yourfile>.rpm

// list all possible metadata options that can be queried
$ rpm --querytags

// querying parts of the metadata (where ${x} is one of the values from --querytags)
$ rpm -qp <yourfile>.rpm --qf "%{DESCRIPTION}"

However, rpmbuild is a bit more pesky and requires a little bit of wrangling, as certain paths still look in default Linux locations when trying to build an RPM (at least the brew bottle installed as of early 2014). This can be fixed with a simple symbolic link:

$ sudo ln -s /usr/local/lib/rpm/ /usr/lib/rpm

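With that in place, a plain binary build from a spec file should behave as it does on Linux (the spec file name here is illustrative):

$ rpmbuild -bb SPECS/mypackage.spec
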
Then the rpmbuild process should work for you on your Mac! Once again I must stress, this is NOT for deployment, and I cannot recommend you try to use these RPMs - I just find this a useful debugging tool from a packaging perspective.

Hope this saves someone else the time.


Understanding the State Monad

After recently tweeting about trampolines in Scala, I was asked about the state monad, as the paper from the tweet talks about implementing State in terms of Free. So it seemed only reasonable to write up a quick post about how to use the "normal" State[S,A] monad in Scalaz.

For the sake of this post, we will be implementing a process that is no doubt familiar to anyone who has ever driven, run, cycled or otherwise travelled on a highway: the ubiquitous traffic light - using the UK signalling semantics, obviously. These traffic signals just rotate through a set of finite states: red to amber, amber to green and so forth. In this way, the light has a given state, and it can move between certain states but not others, and if the light malfunctions it needs to revert to flashing red lights to act as a stop sign. This could of course be implemented using mutable state to keep track of the currently enabled lights etc, but that would suck. The scalaz.State monad allows us to model these state transitions in an immutable fashion, and transparently - but intuitively - thread that state through the computation. Let's get cracking!

Algebraic data types: we gots' em'

So, first up, let's construct a super simple ADT set that models the various parts of a traffic light: colour aspects, modes of operations, and the states of a given signal display set:

sealed trait Aspect
case object Green extends Aspect
case object Amber extends Aspect
case object Red   extends Aspect

sealed trait Mode
case object Off      extends Mode
case object Flashing extends Mode
case object Solid    extends Mode

// represents the actual display set: must be enabled before 
// it can be used. 
case class Signal(
    isOperational: Boolean, 
    display: Map[Aspect, Mode])

object Signal {
  import scalaz.syntax.state._
  import scalaz.State, State._

  // just a lil' bit of sugar to use later on.
  type ->[A,B] = (A,B)

  // convenience alias as all state ops here will deal
  // with signal state.
  type SignalState[A] = State[Signal,A]

  // dysfunctional lights revert to their flashing 
  // red lights to act as a stop sign to keep folks safe.
  val default = Signal(
    isOperational = false, 
    display = Map(Red -> Flashing, Amber -> Off, Green -> Off))
}


This is the shell of the work right here, so now we just need to get to the meat of the matter and implement the state transitioning functions.

Enabling Signals

Let's start by assuming that the signal is entirely disabled, so we need to make it operational:

def enable: State[Signal, Boolean] = 
  for {
    a <- init
    _ <- modify((s: Signal) => s.copy(isOperational = true))
    r <- get
  } yield r.isOperational

This block of code firstly (i.e. in the first generator of the comprehension) initialises the State. Its purpose in life is simply that: no more and no less - it gives you the "context" for the stateful computation, so to speak. This can easily be comprehended by looking at the definition in the Scalaz source:

// State.scala in scalaz
def init[S]: State[S, S] = State(s => (s, s))

It just lifts the S into the State constructor. Simple. The next line in the comprehension is the one doing the actual work of modifying the state, which in this case is a Signal instance. As you can see, it's just accepting a function that takes a given Signal and sets the isOperational value. Internally, the State monad machinery applies this function to the Signal instance being operated on. This example is of course trivial, but I will show later why this is even useful. Finally, the get method is also from State, and is defined to read the current state, whatever that might be:

// State.scala in scalaz
def get[S]: State[S, S] = init

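For completeness, modify - used in the second generator above - can be thought of like this (a semantic sketch rather than the literal Scalaz source):

// conceptually: transform the state, yield no value
def modify[S](f: S => S): State[S, Unit] = State(s => (f(s), ()))
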
Hopefully that was straightforward - essentially the State monad just provides us a way to read / set / replace some value of some application state that we don't currently have, and then later read said state, without ever directly "having" an instance of that data type. Nifty, eh?

The astute reader will be thinking about higher-order functions right about now.

Flashing lights all around

The guts of this particular example: changing the light configuration. In order not to over-complicate things (I assume real life is not this trivial), the change method just takes tuple arguments, which are then applied to a given Signal instance using the display method, which in turn just stuffs them into the signal.display value (I rely on it being a map, and on each aspect in a light being unique - again, this will not cover every conceivable case, but it serves the example).

def change(seq: Aspect -> Mode*): State[Signal, Map[Aspect, Mode]] = 
  for {
    m <- init
    _ <- modify(display(seq))
    s <- get
  } yield s.display

// FIXME: requires domain logic to prevent invalid state changes
// or apply any other domain rules that might be needed. 
// I leave that as an exercise for the reader.
def display(seq: Seq[Aspect -> Mode]): Signal => Signal = signal =>
  if (signal.isOperational)
    signal.copy(display = signal.display ++ seq.toMap)
  else default

So this is our "primitive" for changing lights, but it will be an awkward user API if they have to constantly pass in the exact combination of the entire world of lights, so let's make their lives easy by making some combinators of the possible operations.

Combinators FTW

There are four common states that a UK traffic light might be in:

  • All stop: solid red
  • Get Ready: solid green + solid amber
  • Go: solid green
  • Prepare to stop: solid amber

Let's make combinators that use the change method under the hood:

// common states the signal can be in.
def halt  = change(Red -> Solid, Amber -> Off,   Green -> Off)
def ready = change(Red -> Solid, Amber -> Solid, Green -> Off)
def go    = change(Red -> Off,   Amber -> Off,   Green -> Solid)
def slow  = change(Red -> Off,   Amber -> Solid, Green -> Off)

Awesome. And putting that all together in a small application that we can run:

def main(args: Array[String]): Unit = {
  import Signal._
  import scalaz.State.{get => current}

  val program = for {
    _  <- enable
    r0 <- current // debuggin
    _  <- halt
    r1 <- current // debuggin
    _  <- ready 
    r2 <- current // debuggin
    _  <- go
    r3 <- current // debuggin
    _  <- slow
    r4 <- current
  } yield r0 :: r1 :: r2 :: r3 :: r4 :: Nil

  program.eval(default).zipWithIndex.foreach { case (v,i) =>
    println(s"r$i - $v")
  }
}

Which, when you run it, will print each intermediate step to the console (which of course you wouldn't do in practice, but I do here simply to illustrate the resulting alteration occurring at each step of the contextual state provided by the State monad):

r0 - Signal(true,Map(Red -> Flashing, Amber -> Off, Green -> Off))
r1 - Signal(true,Map(Red -> Solid, Amber -> Off, Green -> Off))
r2 - Signal(true,Map(Red -> Solid, Amber -> Solid, Green -> Off))
r3 - Signal(true,Map(Red -> Off, Amber -> Off, Green -> Solid))
r4 - Signal(true,Map(Red -> Off, Amber -> Solid, Green -> Off))

And that's all there is to it - the State monad is really powerful, and is so general it can be applied to many cases where you might otherwise use mutation.

All the code for this article can be found on Github


Free Monads, Part One

This post is the first of a series; the next article will cover constructing an application using the CoProduct of Free monads (essentially putting the lego blocks together)

When I was growing up, my grandmother used to tell me "There's nothing new under the sun […]". As a child, this seemed like an odd thing to say: clearly new things were invented all the time, so this couldn't be right, could it?… Of course, as we get older, one realises that indeed many "new" things are just the same thing repackaged, polished or slightly altered, but they are fundamentally the same thing, and my grandmother was indeed correct: rare is the occasion there's anything truly new to the world. This experience is one I'm sure is shared by many people as they grow up, and I'd like to draw an interesting parallel here with Functional Programming (FP): over the past years I have repeatedly had these eureka moments, realising that something I was solving had indeed already been solved many moons before by someone else - one simply had not made the connection between the abstraction and the problem. Today was one of those days.

Like many engineers, I am gainfully employed to build large systems that feature an abundant array of non-trivial business logic, and which subsequently have many moving parts to deliver the end solution. The complexity aspect of these moving parts has always bothered me, and over time I have sought out a range of different abstractions to try and alleviate the building of such applications. However, all these solutions pretty much suck, or have some aspect of jankiness, and testing can frequently be a problem: despite best efforts, things can often become awkwardly coupled as a codebase evolves and requirements shift under you.

With this frame, recently I have been investigating Free monads, and my-my, what a delightfully powerful generic abstraction these things are! In this post I will be covering how to implement the much-loved task of logging in terms of scalaz.Free.

Domain Algebra

Before we dive into any specifics about Free, we should first consider the operations necessary for the domain you want to implement, a.k.a the domain algebra. In the case of logging, the domain is of course very small, but it should be familiar to many folks:

// needs to be covariant because of scalaz.Free in 7.0.4;
// in the 7.1 series it's no longer covariant - thanks Lars!
sealed trait LogF[+A]
object Logging {
  case class Debug[A](msg: String, o: A) extends LogF[A]
  case class Info[A](msg: String, o: A) extends LogF[A]
  case class Warn[A](msg: String, o: A) extends LogF[A]
  case class Error[A](msg: String, o: A) extends LogF[A]
}
As you can see, our "domain" simply involves the different levels of log messages, DEBUG through ERROR. The purpose here is to model every single operation in that domain as an ADT. This is essentially the command concept in CQRS, which is just another name for algebra (I use this analogy as perhaps more people are familiar with CQRS). Let's look at the details a little more closely:

sealed trait LogF[+A] 

The LogF trait in this example really does nothing at all; it just serves as the "top level" marker, for which we shortly provide a Functor (hence the name, LogF).

case class Debug[A](msg: String, o: A) extends LogF[A]

The algebra itself needs to extend LogF and take all the arguments required to execute that domain operation (in this case, a single String to print to the output, but you can imagine having a higher number of parameters to actually do something more useful). As for the o: A, this is a vehicle to make the Free abstraction work - in essence, it is the "next computation step", and we can wire that in by virtue of LogF having a Functor, like so:

implicit def logFFunctor[B]: Functor[LogF] = new Functor[LogF]{
  def map[A,B](fa: LogF[A])(f: A => B): LogF[B] =
    fa match {
      case Debug(msg,a) => Debug(msg,f(a))
      case Info(msg,a)  => Info(msg,f(a))
      case Warn(msg,a)  => Warn(msg,f(a))
      case Error(msg,a) => Error(msg,f(a))
    }
}
As you can see, all this Functor instance does is take the incoming ADT and apply the function f to the A argument, which allows us to thread the computation through the ADT in a very general fashion.

So this is our domain algebra - right now this is nothing more than a definition of possible operations; it is totally inert, so we need some way to interpret the possible operations, and actually do something about them; this brings us neatly onto interpreters.

Interpreters
In the domain of logging, the content to be logged is totally disjoint from what is done with that content, for example, perhaps we want to use SLF4J in production, but println whilst we're developing, or perhaps we just want the flexibility to decide later how we should actually do the logging. When designing your system in terms of domain algebra and Free, this becomes trivial, as you simply need to provide a different interpreter implementation that uses whatever implementation you fancy. Let's look at an implementation that uses println:

object Println {
  import Logging._
  import scalaz.{~>,Id}, Id.Id

  private def write(prefix: String, msg: String): Unit =
    println(s"[$prefix] $msg")

  private def debug(msg: String): Unit = write("DEBUG", msg)
  private def info(msg: String): Unit  = write("INFO", msg)
  private def warn(msg: String): Unit  = write("WARN", msg)
  private def error(msg: String): Unit = write("ERROR", msg)

  private val exe: LogF ~> Id = new (LogF ~> Id) {
    def apply[B](l: LogF[B]): B = l match {
      case Debug(msg,a) => { debug(msg); a }
      case Info(msg,a)  => { info(msg); a }
      case Warn(msg,a)  => { warn(msg); a }
      case Error(msg,a) => { error(msg); a }
    }
  }

  def apply[A](log: Log[A]): A =
    log.runM(exe.apply)
}
For the most part, this should be really straightforward to read, as all it's doing is providing the small bit of code that actually does the work of printing to the console. The part that is of interest is the def apply[A](log: Log[A]): A method, as this is where the awesome is taking place. Notice that the argument is of type Log[A]. Until now, we have not defined this, so let's add a definition and explain it:

type Log[A] = Free[LogF, A]

So Log is just a type alias for the Free monad over the LogF functor we defined earlier. This sounds a lot worse than it is; in essence it just means that Log[A] is a value built with one of Free's constructors, of which there are two options:

  • Suspend - the intuition here is "stop the computation and hand control to the caller".
  • Return - and similarly, "I'm done with my computation, here's the resulting value". Both are sketched below.
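
For intuition, here's roughly what those two constructors look like - a simplified sketch, not scalaz's full definition (the real Free in 7.0 also has an internal Gosub case used to implement flatMap):

sealed abstract class Free[S[+_], +A]
// "I'm done": carries the final value of the computation
case class Return[S[+_], +A](a: A) extends Free[S, A]
// "more to do": one layer of the functor, wrapping the rest of the program
case class Suspend[S[+_], +A](a: S[Free[S, A]]) extends Free[S, A]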

So, with this in mind, assuming there is a Log[A] passed in, Free defines the method runM, which will recursively execute the free structure until reaching a Return (essentially flatMap that shit all the way down, so to speak). In order for this to happen, the caller needs to supply a function S[Free[S, A]] => M[Free[S, A]], or more specifically in terms of this example: LogF[Free[LogF, A]] => Id[Free[LogF, A]]. This is exactly the purpose of the exe value: it takes the domain algebra, executes the appropriate function in the interpreter, and "threads" the A through the computation - simply by returning it in this case (as the logging is a side-effect).

Now that you have the algebra for the domain and a way to interpret it, let's add some syntactic sugar so that this stuff is conveniently usable in an application.

Syntax

It would be nice if the API would look something like:

object Main {
  import Logging.log

  val program: Free[LogF, Unit] =
    for {
      a <- log.debug("fooo")
      b <- log.error("OH NOES")
    } yield b

  def main(args: Array[String]): Unit =
    Println(program)
}

Well, it turns out that we can conveniently achieve this by lifting the LogF instance into Free, by virtue of LogF being a Functor… sweet!

implicit def logFToFree[A](logf: LogF[A]): Free[LogF,A] = 
  Suspend[LogF, A](Functor[LogF].map(logf)(a => Return[LogF, A](a))) 

Then we can simply define some convenient usage methods and make the A that we are threading through a Unit, as the act of printing to the console has no usable result.

// defined inside object Logging, so that `import Logging.log` resolves
object log {
  def debug(msg: String): Free[LogF, Unit] = Debug(msg, ())
  def info(msg: String): Free[LogF, Unit]  = Info(msg, ())
  def warn(msg: String): Free[LogF, Unit]  = Warn(msg, ())
  def error(msg: String): Free[LogF, Unit] = Error(msg, ())
}

Critically, using Unit here is simply a product of having no usable value - if we wanted to make a "logger" that was entirely pure and only dumped its output to the console at the end of the application, we could simply write an interpreter that accumulated the content to log in a List[String]!
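
As a rough sketch of that idea (the Accumulator name and layout are mine, and this assumes the LogF Functor from earlier is in scope, plus the resume method scalaz's Free provides for unrolling the structure one layer at a time):

object Accumulator {
  import Logging._
  import scalaz.{-\/, \/-}

  private def line(prefix: String, msg: String): String = s"[$prefix] $msg"

  // walk the Free structure, accumulating lines instead of printing;
  // entirely pure - nothing hits the console
  @annotation.tailrec
  def run[A](log: Log[A], acc: List[String] = Nil): (List[String], A) =
    log.resume match {
      case -\/(Debug(msg, next)) => run(next, line("DEBUG", msg) :: acc)
      case -\/(Info(msg, next))  => run(next, line("INFO", msg) :: acc)
      case -\/(Warn(msg, next))  => run(next, line("WARN", msg) :: acc)
      case -\/(Error(msg, next)) => run(next, line("ERROR", msg) :: acc)
      case \/-(a)                => (acc.reverse, a)
    }
}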

With the sugar defined, an algebra and an interpreter, all that's left is to execute the main method :-)

> run

You will then see the output in the console as executed by the println calls.

Replacing the Interpreter

Let's say that we later decided that using println did not provide sufficient control, and instead we wanted to use SLF4J; one could easily implement another interpreter that sends the content to that different backend. Here's an example:

object SLF4J {
  import Logging._
  import org.slf4j.{Logger,LoggerFactory}
  import scalaz.{~>,Id,Free}, Id.Id

  private val log = LoggerFactory.getLogger(SLF4J.getClass)

  private val exe: LogF ~> Id = new (LogF ~> Id) {
    def apply[B](l: LogF[B]): B = l match {
      case Debug(msg,a) => { log.debug(msg); a }
      case Info(msg,a)  => {; a }
      case Warn(msg,a)  => { log.warn(msg); a }
      case Error(msg,a) => { log.error(msg); a }
    }
  }

  def apply[A](log: Log[A]): A =
    log.runM(exe.apply[Free[LogF, A]])
}

As you can see, the only difference is that the SLF4J plumbing is encapsulated within the interpreter; absolutely nothing has changed about how the program is defined - only how it is actually actioned. The actual application main then just becomes:

def main(args: Array[String]): Unit =
  SLF4J(program)

You can find all the code for this post over on GitHub.


Unboxed new types within Scalaz7

Some time ago I started investigating the latest (and as yet unreleased) version of Scalaz. Version 7.x is markedly different to version 6.x, utilising a totally different design that makes a distinct split between core abstractions and the syntax used to work with those abstractions. In any case, that's fodder for another post; the bottom line is that Scalaz7 is really, really slick - I like the look of it a lot, and I feel there's a lot to be learnt by simply studying the codebase and throwing around the abstractions therein (this may be less true for Haskell gurus, but for mere mortals like myself I've certainly found it insightful).

One of the really neat things that Scalaz7 makes use of is a very clever trick with types in order to disambiguate typeclass instances for a given type T. This was something Miles demonstrated some time ago in a gist, and I was intrigued to find it being used in Scalaz7.

So then, what the hell does all this mean, you might be wondering? Well, consider a Monoid for Int like so:

scala> import scalaz._, std.anyVal._
import scalaz._
import std.anyVal._

scala> Monoid[Int].append(10,20)
res1: Int = 30

...or with sugary syntax imported as well:

scala> import scalaz._, syntax.semigroup._, std.anyVal._
import scalaz._
import syntax.semigroup._
import std.anyVal._

scala> 10 |+| 20
res2: Int = 30

This is simple use of the Monoid type, and this example uses the default type class instance for Int, which simply sums two numbers together. But, consider another case for Int: what if you needed to multiply those same numbers instead?

If there were two typeclass instances for Monoid[Int], unless they were in separate objects, packages or jars, there would be an implicit ambiguity and the code would not compile: the type system would not implicitly be able to determine which typeclass it should apply when considering an operation on monoids of Int.
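
To make that concrete, here's a sketch of the problem (these instance definitions are mine, purely for illustration):

import scalaz.Monoid

object Ambiguous {
  implicit val additive: Monoid[Int] = new Monoid[Int] {
    def zero = 0
    def append(a: Int, b: => Int) = a + b
  }
  implicit val multiplicative: Monoid[Int] = new Monoid[Int] {
    def zero = 1
    def append(a: Int, b: => Int) = a * b
  }

  // implicitly[Monoid[Int]]  // won't compile: ambiguous implicit values
}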

This issue has been overcome in Scalaz7 by using the type-tagging trick mentioned in the introduction. Specifically, the default behaviour for Monoid[Int] is addition, and a second typeclass exists for Monoid[Int @@ Multiplication]. The @@ syntax probably looks a little funny, so let's look at how it's used and then talk in more detail about how it works:

scala> import Tags._
import Tags._

scala> Multiplication(2) |+| Multiplication(10)
res14: scalaz.package.@@[Int,scalaz.Tags.Multiplication] = 20

You'll notice that the value was multiplied this time, giving the correct result of 20, but you may well be wondering about the resulting type signature of @@[Int,Multiplication]... this is where it gets interesting.

Multiplication just acts as a small bit of plumbing to produce a type of A, tagged with Multiplication; this gives you the ability to define a type which is distinct from another - even if they are "the same" (so to speak). The definition of Multiplication is like so:

sealed trait Multiplication
def Multiplication[A](a: A): A @@ Multiplication = Tag[A, Multiplication](a)

object Tag {
  @inline def apply[A, T](a: A): A @@ T = a.asInstanceOf[A @@ T]
}

The Tag object simply tags types of A with a specified marker T - in this case, Multiplication - where the result is A @@ T. I grant you, it looks weird to start with, but the definition of @@ is quite straightforward:

type @@[T, Tag] = T with Tagged[Tag]

Where Tagged in turn is a structural type:

type Tagged[T] = {type Tag = T}

The really clever thing here is that this whole setup is just at the type-level: the values are just Int in the underlying byte code. That is to say if we had something like:

def foobar(i: Int): Int @@ Multiplication = ... 

When compiled it would actually end up being:

def foobar(i: Int): Int = … 

Which is pretty awesome. This sort of strategy is obviously quite generic, and there are a range of different tags within Scalaz7, including selecting first and last operands of a monoid, zipping with applicatives, conjunctions and disjunctions, etc. All very neat stuff!
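
As a parting example, here's a minimal sketch of rolling your own tag - the MaxMarker / Max names are mine, and this assumes the early Scalaz7 encoding where A @@ T is a subtype of A:

import scalaz._, syntax.semigroup._

// a marker trait that exists only at the type level
sealed trait MaxMarker
def Max(i: Int): Int @@ MaxMarker = Tag[Int, MaxMarker](i)

// a Monoid that resolves |+| to math.max rather than addition
implicit val maxMonoid: Monoid[Int @@ MaxMarker] =
  new Monoid[Int @@ MaxMarker] {
    def zero = Max(Int.MinValue)
    def append(a: Int @@ MaxMarker, b: => Int @@ MaxMarker) =
      Max(math.max(a, b))  // the tagged values are still plain Ints
  }

Max(3) |+| Max(7)  // Int @@ MaxMarker = 7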

Leave a Comment

An introduction to simpler concurrency abstractions

No matter what programming language you use to create "the next big thing™", when it comes to running your code there will – eventually – be a thread that has to execute and compute the result of your program. You may be wondering why you should even care about this threading lark? Well, modern computing hardware is typically not sporting faster clock speeds, but instead features multiple cores, or physical processors. If you have a single-threaded program, then you cannot make use of the abundant power that modern hardware makes available; surplus cores simply sit idle and unused. For desktop computers this is less of an issue, but for server-based applications, having wasted resources that you already paid for is quite an issue.

A common scenario that you are likely familiar with – either implicitly or explicitly – is for your program to run on a single thread. In this situation the program executes imperatively. For readers familiar with C or a scripting language like PHP or Ruby, this generally means the code runs top to bottom. Consider this trivial example:

// create a variable
var foo = 0

// loop and increment the var with each iteration
def doSomething =
  for (i <- 1 to 10) {
    foo += 1
  }

// check the value of foo
doSomething
println(foo) // prints 10

This kind of code, irrespective of the exact syntax, should be familiar to anyone who's ever used a mainstream programming language. When this program is run with a single thread, the result will, as one might expect, be an integer value of 10. Seemingly straightforward.

Now, reconsider what would happen if you ran this same program on two concurrently executing threads that shared the same memory space. If you've never done any multi-threaded programming, the answer may not be obvious: as each thread runs its own counting loop, both thread A and thread B will be setting the value of the foo variable. This has some wacky side-effects, in that each thread will constantly be pulling the rug out from under the other's feet. This is not constructive for either thread.

This rather unfortunate scenario has several "solutions" found in the majority of mainstream programming languages: one of these is known as synchronisation. As you might have guessed from the name, the two concurrently executing threads are synchronised so that only one thread updates the foo variable at a time. In various programming languages, locks are often used as a synchronisation mechanism (along with derivatives like semaphores and reentrant locks). Whilst this article isn't long enough to go into the details of all these things (and it's a deep subject!), the general concept is that when a thread needs a shared resource, it locks that resource for its exclusive use whilst it does its business. For the whole time thread A holds the lock, thread B (or indeed, thread n) is "blocked"; that is to say, thread B is waiting on thread A and cannot do any work during that time. This gets awkward. Quickly.
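
For illustration, here's what that looks like on the JVM using intrinsic locks - a minimal sketch of the counting example with the synchronisation made explicit:

// only one thread may hold the lock at a time, so increments
// can no longer interleave mid-update
object Counter {
  private[this] var foo = 0
  private[this] val lock = new Object

  def increment(): Unit = lock.synchronized {
    foo += 1  // thread B blocks here whilst thread A is inside
  }

  def value: Int = lock.synchronized(foo)
}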

Hopefully this gives you a high-level appreciation for the issues associated with writing concurrent software. Even in a small application, concurrent operations can quickly become a practical nightmare (and often do), let alone in a large-scale distributed application that has both inter-process and inter-machine concurrency concerns. Writing software that makes effective and correct use of these traditional concurrency tools is very, very tricky.

Time for a re-think.

As it turns out, some clever folks realised back in the early seventies that manual threading and locking was a poor level of abstraction for writing concurrent software, and with that, invented the actor model. Actors essentially allow the developer to reason about concurrent operations without the need for explicit locking or thread management. Actors were largely popularised by the Erlang programming language in the mid-eighties, where they allowed telecoms companies such as Ericsson to build highly fault-tolerant, concurrent systems that achieved extreme levels of availability – famously "nine nines": 99.9999999% annual uptime. In fact, Erlang is still highly popular in the telco sector even today, and many of your phone calls, SMS and Facebook chat IMs all pass through Erlang actors as they wind their way across the interweb.

So what is the actor model? Well, primarily actors are a mechanism for encapsulating both state and behaviour into a single, consolidated item. Each actor within an application can only see its own state, and the only way to communicate with other actors is to send immutable "messages". Unlike the earlier example in the introduction that involved shared state (the foo variable), and resulted in blocking synchronisation between threads, actors are inherently non-blocking. They are free to send messages to other actors asynchronously and continue working on something else - each actor is entirely independent.

In terms of the underlying implementation, actors have what is known as a "mailbox". In the same way that a postman places letters through a physical mailbox, those letters collect on top of each other one by one, with the oldest letter received at the bottom of the pile. Actors operate a similar mechanism: when an actor instance receives a message in its mailbox, if it's not doing anything else it will action the message, but if it's currently busy, the message just sits there until the actor gets to it. Likewise, if an actor has no messages in its mailbox, it consumes a very small amount of system resources and won't block any application threads: these qualities make actors a much easier abstraction to deal with, and lend themselves to writing concurrent (and often distributed) software.

Actors by example

Enough chatter; let's look at an example of using actors. The samples below make use of a toolkit called Akka, which is an actor and concurrency toolkit for both Scala and the wider polyglot JVM ecosystem. I'll be using Scala here, which is a hybrid-functional programming language. In short, that means values are typically immutable, and applications are constructed from small building blocks (functions). Scala also sports a range of language features that allow for concise, accurate programming not available in other languages. However, the principles of these samples can easily be replicated both in imperative languages like C# or Java, and in languages such as Erlang.

The most basic actor implementation one could make with Akka would look like this:


import akka.actor.Actor

class MyActor extends Actor {
  def receive = {
    case i: Int => println("received an integer!")
    case _      => println("received unknown message")
  }
}
This receive method (technically a partial function) defines the actor's behaviour; that is to say, the actions the actor will take upon receipt of different messages. With the receiving behaviour defined, let's send this actor a message:


import akka.actor.{ActorSystem, Props}

// boot the actor system
val system = ActorSystem("MySystem")
val sample = system.actorOf(Props[MyActor])

// send the actor a message asynchronously
sample ! 1234

Besides the bootstrapping, the ! method (called "bang") is used to indicate a fire-and-forget (async) message send to the sample actor instance, which, as you saw from the earlier code listing, will output a message to the console. Clearly there are more interesting things to do than print messages to the console window, but you get the idea. At no point did the developer have to specify any concrete details about threading, locking or other fine details. Akka allows you fine control over thread allocation if you want it, but relatively default settings will see you through to about 20 million messages a second, and Akka is able to reach 50 million messages a second with some light configuration. This is staggering performance for a single commodity server. With all that being said, this example is of course very trivial, so let's talk about what else actors (and Akka) can do…

It turns out that the conceptual model of actors is a very convenient fit for a range of problems commonly found in business, and Akka ships with a range of tools that make solving these problems in a robust and correct way nice and simple. Whilst this blog post is too short to cover all of Akka's features, one that is no doubt of interest to most readers is fault tolerance.

Supervisor Hierarchies

Think about the way many people write programs: how many times have you written a call to an external system, or to the database, and just assumed that it would work? Such practices are widespread, and Akka is entirely based around the opposite idea: failure is embraced and assumed to happen. With this frame of reference, one can design systems that recover from general faults gracefully, rather than having no sensible provision for error. One of the main strategies for providing such functionality is known as supervisor hierarchies. As the name suggests, actors can be "supervised" by another actor that takes action in the case of failure. There are a couple of different strategies in Akka that facilitate this structure:

  • All for one: If the supervisor is monitoring several actors, all of those actors under the supervisor are restarted in the event that one has a failure.
  • One for one: If the supervisor is monitoring multiple actors, when one actor has a failure, it will be restarted in isolation, and the other actors will be unaware that the failure occurred.

Simple enough. Let's see supervision in action:


import akka.actor.{Actor, OneForOneStrategy}
import akka.actor.SupervisorStrategy.{Resume, Restart}

class Example extends Actor {
  import akka.util.duration._

  override val supervisorStrategy = OneForOneStrategy(
    maxNrOfRetries = 10, withinTimeRange = 1 minute){
      case _: NullPointerException => Resume
      case _: Exception            => Restart
    }

  def receive = {
    case msg => ....
  }
}

You can see from the listing that the structure of the actor is exactly the same as the trivial example earlier - it's just augmented with a customisation of the supervisorStrategy. This customisation allows you to specify different behaviour for different errors arising. This is exceedingly convenient, and very easy to implement.


The actor model is a simple abstraction that facilitates easier development of robust, concurrent software. This article only scratches the surface of what's available, but there is a lot of cutting-edge work being done right now in the area of concurrency and distributed systems. If this article has piqued your interest, there has never been a better time to get your hands dirty and try something new!


HTML5 Event Sources with Scala Spray

With the onset of HTML5, modern browsers (read: everyone but IE) implemented the W3C's Server-Sent Events feature. This essentially allows you to push events from server to client without any additional hacks like long polling, or additional protocols like WebSockets. This is exceedingly neat, as it allows you to stream events with a very tiny API and minimal effort on the part of the developer.

Recently I've been doing a fair bit with the Scala Spray toolkit when building HTTP services, so I thought I'd take it for a spin and see how trivial it would be to implement SSE. Turns out it was pretty simple. Here's a demo:

As mentioned in the demo video, most of the SSE tutorials you'll find online don't talk about streaming events; they just talk about having an event-source HTTP resource from which the browser will consume events. The massive omission here is that if the server just closes the connection like any regular request, then you essentially have to wait for the retry period to pass before the browser will automatically reconnect. This is not really that great, as it just gives you a more convenient API for polling (how 90s!). On the one hand, if you have a low volume of events, or a bit of lag between updates doesn't hurt, then it may not be an issue; but in the main I'd imagine that most people wanting a stream of events would want exactly that: a stream. This is achieved by chunking the response server-side, and the content flushed to the client with SSE's mandatory "data:" prefix will then be operated on, giving you a handy stream of data. Of course, you'll have to close that connection at some point, but if you set the retry latency to a low value then you'll just move from stream to stream with very little lag (as per the example video).
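
To give a flavour of the server side, here's a rough sketch using spray-can's chunked response messages (the SseStreamer actor and its wiring are my own invention, not taken from the demo code - the essentials are a ChunkedResponseStart, followed by a MessageChunk per event carrying the "data:" prefix):

import akka.actor.{Actor, ActorRef}
import spray.http._

class SseStreamer(client: ActorRef) extends Actor {

  override def preStart(): Unit =
    // open a chunked response; SSE requires the text/event-stream type
    client ! ChunkedResponseStart(HttpResponse(
      headers = List(HttpHeaders.RawHeader("Content-Type", "text/event-stream"))))

  def receive = {
    case msg: String =>
      // each event carries the mandatory "data:" prefix,
      // terminated by a blank line
      client ! MessageChunk(s"data: $msg\n\n")
    case SseStreamer.Close =>
      client ! ChunkedMessageEnd()
  }
}

object SseStreamer {
  case object Close
}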

In the broad sense, this kind of approach is useful in a wide set of circumstances to improve the user experience. I've rambled on about this before, but now with SSE there is a generic way of achieving it without needing a framework or platform that explicitly makes comet-style programming easy: it's broadly accessible for all toolkits.

The code for the example application I demoed in the video can be found on GitHub. Enjoy.


Using dust.js with Scala SBT

If you've been living under a rock for the past few weeks you might have missed LinkedIn blogging about their usage of dust.js. This idea of client-side templating is certainly a very interesting one, and something that is in all likelihood very useful for many large-scale applications currently in production. dust.js is quite nice in that it has no dependencies on any JavaScript library such as jQuery or YUI. This is really rather handy, as it means there's no prescription on your style of implementation.

However, whilst the dust.js documentation has a wide range of examples demonstrating how to use the templating engine, what it does not have is any examples of how one might actually wrap it up and integrate it. So, with that, I wanted to write up a quick post explaining some of the things that might not be immediately obvious. Before getting to the meat of the article, it's worth bearing in mind some terminology:

  • Template: The source dust.js template

  • Compiled template: A pre-compiled version of the template; essentially some generated JavaScript that is created at build-time (it's also possible to create the JavaScript at runtime, but this is heavily discouraged)

  • Context: Each template can only be rendered with a context. More regularly we can think of this as the data that populates the template variables

In short, the developer writes a source template which is compiled to JavaScript at build time and then rendered at runtime with a given context, which is likely a dynamic JSON payload.

When I first got going with dust.js I was unsure how to handle this build-time compilation. Frankly, the support for everything that isn't JavaScript or Node was (and is still mostly) non-existent. This was problematic, as I certainly didn't want to be compiling templates on each and every request for a given HTML page. With that, I set about writing an SBT plugin for dust.js template compilation. This allows you to have SBT build the dust.js templates into JavaScript that you can simply reference directly from your HTML (more on this later). The plugin takes the templates it finds in src/main/dust and compiles them to the resource-managed target whenever you invoke the dust task from the SBT shell. It's also super easy to redirect the compilation output to the webapp directory during development so that you can simply use the local Jetty server to try out your app:

(resourceManaged in (Compile, DustKeys.dust)) <<= (
  sourceDirectory in Compile){
    _ / "webapp" / "js" / "view"
  }

But I digress. For more information about the plugin, see its README file in the GitHub repo.

After making my own tooling for SBT to work with dust.js, I then got to thinking about how one would actually wrap the dust.js rendering system. Curiously, the examples I found online suggest making an AJAX call to fetch the context, or template data. This strikes me as quite sub-optimal as it would mean creating the following requests:

  • fetching the general html for the page in the first instance
  • another request for the dust.js runtime
  • another request for the dust template
  • and then another for the context data

All that, just to render one template - adding unnecessary lag to the page load whilst the ancillary request for context data takes place (after the initial page load has already completed!). For general page rendering, this seems somewhat bizarre. That is to say, requesting context data via AJAX for, say, a single-page application could well be a reasonable use-case, but for general pages that render "in full", it seems silly.

Assuming you're just rendering regular page content, we can make a separation between things one would want to load and cache locally in the browser, and things that need to be dynamic. For the most part, the context is the only thing you'd want to be dynamic, and the more general plumbing code for rendering would probably be reusable throughout your rendering call sites. Consider a simple example:

The dust template (let's call it home.dust, in src/main/dust):

  <h2>Hey {name}</h2>

This template will then be compiled to home.js and placed in the resource-managed output location (as per your build config). The other thing to think about is that if you had several pre-compiled template files needed on a given page, it would make sense to merge and minify them into a single JS file. Alternatively, if you were building a single-page application it might be nice to use a client-side loading framework like LABjs to pull in the template files lazily (i.e. as you needed them). Getting back to the example, and considering the aforementioned, let's assume an HTML page that loads the DOM skeleton and the static libraries needed for rendering:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "">
<html xmlns="" xml:lang="en" lang="en">
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
    <script type="text/javascript" src="js/dust.min.js"></script>
    <script type="text/javascript" src="js/view/home.js"></script>
  </head>
  <body><h1>Home</h1><div id="whatever"></div></body>
</html>

Now, as it stands this page would display the heading "Home" and load the dust library along with the compiled template; otherwise it is essentially static. In order to render the dust template one would have to add something like the following to the page <head>:

<script type="text/javascript">
dust.render("home", { name: "Fred" }, function(err, out) {
  document.getElementById('whatever').innerHTML = out;
});
</script>

This simple bit of JS would render the content into the element with the "whatever" ID attribute. Now then, this will work, and proves the concept for sure. What it does not do, however, is provide a dynamic context, which almost certainly most users would want. In this way, I can see why people might think it made sense to make an AJAX request to a pure JSON backend service to get the context dynamically, but as above, this would be a bit slow and cumbersome. Instead, I think it would probably make more sense to dynamically deliver a call to a rendering function with the desired context. Let's qualify that: when the page loads, the template has been loaded and so has the dust library, so all that is actually required is a call site that starts the rendering. This need can quite nicely be served by a server-side helper function. In essence, when the page is loaded, the server generates a <script> tag. Here's an example:

  <script type="text/javascript" src="/view/home"></script>

Which in turn would deliver something like:

draw("home", { name: "Whatever" });

That is, where the draw function wraps the dust rendering system. This is just an example, though; the point is that this could be any kind of JavaScript. For instance, it could be a backbone.js view or something along those lines: you deliver the rendering code along with the context. This at least seems more optimal than making a fuckton of JavaScript requests for context objects, as other blogs suggest.
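
Sketching the server side of that in Scala (the viewScript helper here is hypothetical; any HTTP toolkit could serve its result as JavaScript from /view/home):

// build the JavaScript delivered by the /view/home endpoint;
// contextJson would come from whatever JSON library you prefer
def viewScript(template: String, contextJson: String): String =
  s"""draw("$template", $contextJson);"""

// viewScript("home", """{ "name": "Whatever" }""")
// returns: draw("home", { "name": "Whatever" });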

In the next post I'll be showing you how to combine all the latest hipster web programming tools like dust.js, CoffeeScript and LESS with a Scala backend using Unfiltered to deliver the dynamic template contexts.


The Year in Photos, 2011

This is a fairly personal post, so I'm sorry if you are a regular reader of the Scala articles here - those will resume from the next post!

The Year's Events

As is becoming customary, I like to try and review the year nearly past as we look forward to the delights of the next. This post is the second in a yearly series: a conscious effort to note what the year was made up of and recall those awesome memories.

Alpe d'huez, France

Most years I try to get some skiing in with friends, and this year we'd decided to go to France rather than my beloved Italy. Whilst the snow could have been better, we had a great time. I took this picture at the top of the highest gondola lift…

Bathonian Sunsets

I'm fortunate to live in a beautiful area of the UK, and springtime is quite idyllic here. As the winter thaw gives way to things growing and sprouting, the sun once again starts to bathe the evenings in glorious sunsets. This is one such evening, looking out over the garden.

QCon London, Scala Community Dinner

March saw QCon return to London for a whole week, bringing with it a herd of geeks from all over the globe. Many of my friends were visiting for the conference, and with that we decided it would be a good idea to get the Scala folks together for some food, wine and all-round good times. Having people like James and Debasish there was awesome fun. Can't wait to do something similar next year during Scala Days. James, we're still waiting for "scala minus minus" ;-)

California Route 1: Pacific Highway

For anyone who's been reading this blog over the past years, you may recall that last year I was stranded in mainland Europe during the volcano eruption whilst attending ScalaDays 2010. After rounding up a bunch of stranded folks for dinner, I ended up meeting Jon-Anders - making a long story short, we became good friends and one day decided that what we should do is drive the length of the Pacific Highway in a convertible Mustang. After a hastily organised flight to America we were all set. What ensued was an utterly hilarious time away, where we had arranged nothing in advance and just took things as they came. From huge trees in Redwood country… to the winding roads and stunning coastline of Big Sur and southern California:

The trip was a complete riot, and finishing it off by spending time at Xerox PARC was a real high for me. It was literally like walking the halls of technology fame.

Early Summer Thistles

Getting back from Cali was somewhat of a comedown, but I often go for hikes in and around where I live, and out one day I noticed a small set of thistles growing on an open, exposed hill face. Pretty nice, especially with the city backdrop.

Canada, Upstate New York and Niagara Falls

In summer I was sent to Canada for a meeting (technically just across the border in Upstate New York), but this was a pretty awesome experience. Having booked a "Small European Saloon Car" before traveling, my colleague and I were surprised to be presented with a 4.0 litre Land Cruiser. Clearly "small" has some definition in Canada that I'm not familiar with. Anyway… it was fun to drive around in a tank of a truck for a few days and enjoy the delights of the Toronto and New York scenery:

Of course, being in the area, it only made sense to swing by Niagara Falls on the way back to the airport. It was my first time there and damn, it's far, far bigger than you might imagine. It really is a stunning natural spectacle.

End of Summer: Harvest Time

Back in England, the end of the summer is yet another lovely time. The UK was drenched in sunshine for the last few days (weeks?) of August, which meant all the farms nearby were busy harvesting their crops and preparing the land for winter. Being out on a walk, I took this picture just after the field had been harvested:

JavaZone and M.I.T Arctic

Come September it was time once again for the most awesome conference of the yearly calendar: JavaZone! I love this conference; it's always well organised, has great talks and really wicked parties. This year was no exception to that rule and proved to again be most excellent. This year we were also joined by some of our American friends, which just added to the delight. Following the conference, some of us headed north from Oslo to the M.I.T Arctic lab for a few days of geekery and walking in the tundra. This was a really special highlight of the year and felt like a somewhat magical trip for everyone. Here's a small sample of what we got up to (not all pictures were taken by me; credit where appropriate):

(photo credit: Andreas Røe)

(photo credit: Andreas Røe)

And don't forget the somewhat successful fishing in the fjords!!

It really was an amazing trip. If you've never visited Norway, you should; it's a stunning country.

Lift in Action goes to production

So, we're nearly at the end of the year, and I should really mention Lift in Action going to print. This was a huge, huge milestone for me personally and concluded a two-year unit of work. I should also mention that during the writing of this blog post I had originally neglected to even mention the book, and a friend had to remind me to include it… I'm sure the psychologists would have something to say about that, but anyway!...
