`deftype`, for defining everything from scratch, and `defrecord`, which adds method implementations for a couple of interfaces (some from Java, some from Clojure) that make the new type act like a Clojure map object.

But what if you want something in between? For example, to make your type a good Clojure citizen, you want it to accept metadata (a feature provided by `defrecord`), but you don't want all the map stuff. Or perhaps you want a map-like interface for the fields of your type, but without the possibility of extending the map with new keys. Clojure doesn't help you out of the box; your only choice is to re-implement the required interfaces yourself, or to borrow the code from Clojure's `defrecord`, if you are up to deciphering how it works. There is no way to *reuse* method implementations.

This also becomes a problem if you want to reuse your own method implementations. You'd need to write your methods outside of any `deftype`, possibly in a way that allows parametrization, and then insert the code into a `deftype` form. You might be tempted to use macros for this, but that won't work: macros are expanded as part of the evaluation of forms, but inside a `deftype` form, almost nothing gets evaluated. The only place where macros can be put to use inside a `deftype` is inside the code of the individual methods.
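To make this limitation concrete, here is a minimal self-contained sketch in current `deftype` syntax (the type and macro names are my own, not from the original post): a macro works fine *inside* a method body, but a macro call cannot stand in for the method definitions themselves.

```clojure
;; A macro used inside a method body is expanded as usual:
(defmacro double-it [x]
  `(* 2 ~x))

(deftype Wrapper [v]
  Object
  (toString [this]
    (str (double-it v))))   ; fine: method bodies are ordinary code

(str (Wrapper. 5))          ; => "10"

;; But a macro cannot expand *into* method definitions:
;; (deftype Wrapper2 [v]
;;   (methods-for-wrapper v))  ; deftype would reject this form
```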

The library methods-a-la-carte (also available at Clojars) comes to your rescue. It defines a templating system, similar in spirit to syntax-quote but with some important differences, that lets you define parametrized templates for methods and sets of methods. It also defines an enhanced version of `deftype`, called `deftype+`, which expands such templates inside its body. Finally, it comes with a small collection of predefined method implementations, corresponding to the features of `defrecord` but available individually.

First, a simple example of a type that reuses just the metadata protocol implementation:

```clojure
(ns example
  (:use [methods-a-la-carte.core :only (deftype+)])
  (:use [methods-a-la-carte.implementations :only (metadata keyword-lookup)]))

(deftype+ foo
  [field1 field2 __meta]
  ~@(metadata __meta))

(def a-foo (with-meta (new foo 1 2) {:a :b}))
(prn (meta a-foo))
```

This type has two plain fields (named rather unimaginatively `field1` and `field2`), and a special field `__meta` for storing the metadata. This happens to be the name that Clojure's `defrecord` uses for the metadata field, but this is unimportant. What *is* important is that the name begins with a double underscore, as `deftype` handles such fields specially: they are omitted from the constructor argument list (to the best of my knowledge, this is an undocumented feature of `deftype`). Whatever name you choose, you have to give the same name as a parameter to the `metadata` template.

Let’s add another feature to our type: keyword lookup:

```clojure
(deftype+ foo
  [field1 field2 __meta]
  ~@(metadata __meta)
  ~@(keyword-lookup field1 field2))

(def a-foo (new foo 1 2))
(prn (:field1 a-foo))
(prn (:field2 a-foo))
```

The parameters to the `keyword-lookup` template are the field names for which you want keyword lookup. They can be any subset of the type's fields.

By now you might be curious to know how the templates are defined, for example in order to define your own. Here’s the metadata template, the simplest one in the collection:

```clojure
(defimpl metadata [fld]
  clojure.lang.IObj
  (meta [this#]
    ~fld)
  (withMeta [this# m#]
    (new ~this-type ~@(replace {'~fld 'm#} '~this-fields))))
```

This template has one parameter, `fld`, naming the field that stores the metadata. Everything after the parameter list is the content of the template, with a tilde standing for expressions that are replaced by their values, just as with syntax-quote templates. Another similarity with syntax-quote is that symbols ending in `#` are replaced by freshly generated unique symbols.

There are two major differences between the new templating mechanism and the well-known syntax-quote:

- Symbols are not namespace-resolved. This is important because, contrary to the use of templates in macro definition, namespace resolution is not appropriate for most of the symbols in a method template (method names, method arguments, interface and protocol names).
- Symbols are not looked up in the lexical environment (there is none), but first in a dynamic environment and then in the namespace of the template definition.

The dynamic environment is initialized by `deftype+` with the following values:

- `this-type`: the symbol naming the type being defined
- `this-fields`: the vector of field names supplied to `deftype+`

The above method template used both of these values in its code for `withMeta`. Here is what the first example (type `foo` with just the metadata implementation) expands to:

```clojure
(deftype foo [field1 field2 __meta]
  clojure.lang.IObj
  (meta [this#2515]
    __meta)
  (withMeta [this#2515 m#2516]
    (new foo field1 field2 m#2516)))
```

As with all templating mechanisms, including syntax-quote, the interplay of evaluation rules, substitution rules, and quoting requires some experience before it begins to seem natural. Be prepared for some head-scratching as you write your first templates. Simply using them should be much easier, and probably sufficient for most users. Feedback welcome!


There are a few good reasons to represent quantities by magnitude-unit pairs rather than by plain numbers:

- When quantities are represented by numbers, the units become a matter of convention, written down in a comment (if at all) rather than in the program code. This makes mistakes rather likely, with possibly serious consequences: NASA’s Mars Climate Orbiter crashed because of different units being used in different parts of the software that was used to calculate its flight trajectory.
- With just numbers, it is not even possible to verify that a quantity passed into a function has the right dimension. With an additional unit, such a check is very easy to do.
- The unit and dimension information provides additional documentation to the human reader, and aids in debugging.

A number of libraries for various programming languages therefore implement units, dimensions, and quantities, with the associated arithmetic and comparison operators and sometimes also mathematical functions. Clojure recently joined the crowd: the `units` library is available at Clojars.org and the source code is hosted by Google Code. In this post, I describe how the library works and give a few examples.

First, a simple example for illustration. Like any Clojure script, the first thing to do is to set up the namespace with all the stuff we need:

```clojure
(clojure.core/use 'nstools.ns)

(ns+ unit-demo
  (:clone nstools.generic-math)
  (:from units dimension? in-units-of)
  (:require [units.si :as si]))
```

This looks rather complicated, so it deserves some explanation. We want to be able to calculate with quantities, units, and dimensions, in particular to do arithmetic (+ - * /) and comparisons (<, >, min, max) on quantities. Clojure's built-in arithmetic and comparison functions work only on numbers, so they are not useful here. In `clojure.contrib.generic`, there are *generic* versions of these operations, meaning that they can be defined for any datatype for which they make sense. To achieve this goal, they are implemented as multimethods, which implies some bookkeeping overhead that reduces performance. In fact, it is for performance reasons that Clojure's standard arithmetic functions are *not* generic.
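The multimethod approach can be sketched in a few lines. This is a hypothetical illustration of the idea, not the actual code of `clojure.contrib.generic`:

```clojure
;; A generic addition as a multimethod, dispatching on the classes
;; of both arguments:
(defmulti g+ (fn [a b] [(class a) (class b)]))

;; Behaviour for ordinary numbers delegates to the built-in +:
(defmethod g+ [Number Number] [a b]
  (clojure.core/+ a b))

;; A library can later add methods for its own types (quantities,
;; matrices, ...) without touching the code above.
(g+ 1 2)   ; => 3
```

The dispatch function runs on every call, which is the bookkeeping overhead mentioned above.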

Constructing a nice namespace for generic arithmetic using Clojure's standard namespace management tools is a bit cumbersome: we'd have to use an explicit `:refer-clojure` clause in `ns` in order to exclude the standard arithmetic functions, and then have a lengthy `:use` clause for adding the generic versions from the various submodules of `clojure.contrib.generic`. An easier way is to use the nstools library, which defines a suitable namespace template that we can simply clone. We then add the dimension-checking predicate `dimension?` and the conversion function `in-units-of` from the `units` library, and the shorthand `si` for referring to the namespace that defines the SI unit system that we will use.

Now we can start doing something useful. The following function calculates the force exerted by a spring of force constant `k` that has been compressed or extended by a displacement `x`:

```clojure
(defn spring [k]
  {:pre [(dimension? (/ si/force si/length) k)]}
  (fn [x]
    {:pre [(si/length? x)]}
    (- (* k x))))
```

The basic code looks just as if we had written it for use with plain numbers. The only difference is the preconditions that verify that the arguments `k` and `x` have the right dimensions: length for `x`, force constant for `k`. The test for length is simpler because, for all dimensions that have been assigned a name in the definition of the unit system, there is a direct test predicate, such as `si/length?`. There is no predefined dimension for “force divided by length”, so we have to use the generic predicate `dimension?` and construct the dimension arithmetically. The only operations defined on dimensions are multiplication and division; the rest (addition/subtraction, comparison) would not make sense.

Let's use our function `spring`:

```clojure
(def a-spring (spring (/ (* 5 si/N) si/cm)))
(prn (a-spring (si/cm 1/2)))
```

The first line defines a spring with a force constant of 5 N/cm. You can see in the expression that calculates it that units can be used like quantities in arithmetic. The unit “Newton” behaves just like the quantity “1 Newton”. However, these two values are represented differently internally, for a good reason that I will explain a bit later. The second line evaluates the force exerted by the spring when elongated by 1/2 cm. It shows another way to construct a quantity from a unit and a magnitude: units can be called as functions, with the magnitude supplied as the argument, returning a quantity.

The last line produces the output

```
#:force{-5/2 N}
```

The result thus has the dimension “force”, the magnitude -5/2, and the unit “Newton”. The dimension can be shown because it is a named dimension defined in the SI unit system; otherwise the computer could not have guessed the name of the dimension. Let's see what happens when we print a force constant:

```clojure
(prn (/ (* 5 si/N) si/cm))
```

The output is

```
#:quantity{5 100.kg.s-2}
```

No dimension name, no unit name: the magnitude is 5, the unit is 100 kg/s^2, and it is expressed in SI base units plus a prefactor.

Let’s look at some more examples of unit arithmetic in the following REPL protocol:

```clojure
unit-demo> (+ (si/m 1) (si/km 3))
#:length{3001 m}
unit-demo> (+ (si/km 3) (si/m 1))
#:length{3001/1000 km}
unit-demo> (= (+ (si/m 1) (si/km 3)) (+ (si/km 3) (si/m 1)))
true
```

This shows how units are converted in arithmetic: the result has the unit of the first argument. However, exchanging the arguments still yields a result that is equal to the original one, as indeed “1 km” and “1000 m” are the same quantity.

Next, some more complicated examples: we calculate the kinetic energy of a car:

```clojure
unit-demo> (/ (si/km 100) si/h)
#:velocity{100 5/18.m.s-1}
unit-demo> (let [v (/ (si/km 100) si/h)
                 m (si/kg 800)]
             (* 1/2 m v v))
#:energy{4000000 25/324.m2.kg.s-2}
unit-demo> (let [v (/ (si/km 100) si/h)
                 m (si/kg 800)]
             (in-units-of si/J (* 1/2 m v v)))
#:energy{25000000/81 J}
```

The last line shows how to convert a quantity to a different unit. Note that the result is always equal to the input quantity, only the representation changes.

At some point, one inevitably has to communicate with the number-only world, usually for I/O, or for plotting. So how do we convert a quantity to a number? It should be clear that this operation implies the choice of a unit. The simplest solution is to divide the quantity by the desired unit: the result will be dimensionless and thus a plain number:

```clojure
unit-demo> (/ (a-spring (si/cm 1/2)) si/mN)
-2500
```

Another approach would be to convert to the desired unit using `in-units-of` and then extract the magnitude using the function `magnitude` from the `units` library:

```clojure
unit-demo> (units/magnitude (in-units-of si/mN (a-spring (si/cm 1/2))))
-2500
```

At this point it should be clear that the `units` library defines three datatypes: dimensions, units, and quantities. It is less obvious that dimensions and units (and thus indirectly quantities) refer to a *unit system*. Without a unit system, the computer could not know that the quotient of a length and a time is a velocity, for example. Nor could it know that “Newton” is just a convenient name for “m kg/s^2”. A unit system defines *base dimensions* and *base units*. The SI system (SI = Système International), which is today used all over the world in science and engineering, as well as in daily life in most countries, defines seven base dimensions and associated units:

- length (meter, m)
- mass (kilogram, kg)
- time (second, s)
- electric current (ampere, A)
- temperature (kelvin, K)
- luminous intensity (candela, cd)
- amount of substance (mole, mol)

Neither the choice of these particular dimensions nor even the choice of seven base dimensions is obvious. One could very well use the electric charge instead of the electric current, for example. And one could very well not have the dimension “amount of substance” at all. The choices made for the SI system reflect the state of the art in metrology, taking into account what can and what cannot be measured with high accuracy.

All dimensions other than the base dimensions are expressed as products of powers of the base dimensions. For example, velocity is length^1 time^-1, and volume is length^3. The SI system is constructed to make sure that all powers are integers, but this is not true, e.g., for the older cgs system, which has fractional powers for dimensions related to electricity. According to the principles of dimensional analysis, a dimension is in fact nothing but a name for a collection of powers (seven integers for the SI system). Metrological reality is a bit more complicated, because there can be multiple dimensions with the same set of exponents. For example, in the SI system, both frequency (measured in cycles per second) and radioactivity (measured in decays per second) are equivalent to time^-1, because neither “cycle” nor “decay” has its own dimension. The `units` library takes this into account and makes a distinction between frequency, radioactivity, and 1/time. The first two are not compatible with each other, meaning that you can't add 1 Bq and 5 Hz. However, either one is compatible with 1/s, so you can add 1 Bq and 5/s. This feature requires that dimensions be represented by a specific data type; otherwise a list of exponents would be sufficient.
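To make the exponent bookkeeping concrete, here is a minimal sketch of my own (not the library's actual representation) with dimensions as maps from base dimension to exponent. Note that such a bare exponent map is exactly what could *not* distinguish frequency from radioactivity, which is why the library uses a richer data type:

```clojure
;; A dimension as a map from base-dimension keyword to integer exponent.
;; Multiplication adds exponents; division subtracts them.
(defn dim-mul [d1 d2]
  (into {} (remove (comp zero? val)
                   (merge-with + d1 d2))))

(defn dim-div [d1 d2]
  (dim-mul d1 (zipmap (keys d2) (map - (vals d2)))))

(def length   {:length 1})
(def duration {:time 1})

(dim-div length duration)   ; velocity: {:length 1, :time -1}
```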

Units are handled much like dimensions: each base dimension has a base unit, and each non-base unit is defined as a product of powers of base units, plus a numerical prefactor. Quantities are then made up of a unit and a magnitude, which is typically a number. It is not strictly necessary to make the distinction between units and quantities, as in fact any quantity can be used as a unit. There are libraries around that use a single representation for both. However, there are two advantages to keeping the distinction:

- The `units` library permits magnitudes of quantities to be values of any type that implements generic arithmetic, whereas unit prefactors must be numbers. It is thus possible to use matrices as magnitudes, provided all elements have the same unit. This permits efficient implementations of many algorithms while still profiting from dimension checking and unit conversion.
- Without specific unit objects, every quantity would be represented as a prefactor with respect to a product of powers of the base units. The information about which unit the quantity was initially expressed in would be lost. While this doesn't matter from the point of view of dimensional analysis, it does matter from a numerical point of view. For example, quantities at the atomic scale have very small prefactors when expressed in terms of SI base units. With magnitudes expressed as floating-point values, there is thus a risk of underflow in unit arithmetic. It is in general preferable to keep quantities in their original units and apply conversion only when requested or when inevitable (such as in the addition of two quantities).

To close this brief description of the design decisions behind the `units` library, a few words about temperatures. I have decided not to include support for temperature conversion in the initial versions of the library, and I am not sure if I will ever add it. Temperature is special in that the scales we use in daily life (nowadays mostly centigrade and Fahrenheit) have an arbitrarily chosen zero point that does not coincide with the “natural” zero point of temperature, which corresponds to the lowest possible energetic state of a system. Allowing for such units defined with an offset implies enormous complications: a distinction must be made between “differential” and “absolute” units, and arithmetic must be defined carefully to make sure that absolute units can be used only in addition with a differential unit or in subtraction. I don't think that introducing that amount of complexity is justified, considering that daily-life temperatures are rarely combined in computations with quantities of other dimensions.


Something I didn’t like about pre- and post-conditions is the rather heavy syntax. For short functions, the conditions take up more space than the code itself. Moreover, parsing them visually takes some effort as well, as much as reading the code itself. Wouldn’t it be nice if preconditions could be written somehow right in the argument list, and postconditions at the level of the function definition?

Well, this is Lisp, so if you don’t like some syntax, you just roll your own. I didn’t come up with a better *general* syntax though, but I think that what I describe below is much nicer and suitable for 90% of pre- and post-conditions used in practice. The main limitation is that each condition can depend on only one argument, or on the return value. For the other cases, there is still Clojure’s general syntax, which is perfectly compatible with my extension. For those who want to play with this themselves, here is the code.

As a first example, here’s a pretty stupid algorithm to calculate integer powers of a number:

```clojure
(defn (number?) power
  [(number?) x (integer?) (pos?) n]
  (apply * (repeat n x)))
```

The preconditions check that the first argument is a number and that the second one is a positive integer. The postcondition checks that the result is a number; the utility of this test is a bit dubious, but it serves as an illustration. Note that you can have multiple conditions per argument, and also multiple postconditions. The full form representing the condition is constructed by inserting the argument to be tested in the second position of the supplied list. The above function definition actually expands to:

```clojure
(defn power
  ([x n]
   {:pre [(number? x) (integer? n) (pos? n)],
    :post [(number? %)]}
   (apply * (repeat n x))))
```
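The construction rule, inserting the tested symbol as the second element of the supplied list, can be sketched as a tiny helper (hypothetical, not the linked implementation):

```clojure
;; (integer?) + n  =>  (integer? n)
;; (>= 0)     + n  =>  (>= n 0)
(defn add-tested-arg [condition arg]
  (cons (first condition) (cons arg (rest condition))))

(add-tested-arg '(>= 0) 'n)   ; => (>= n 0)
```

This also explains why `(>= 0)` below means “non-negative”: the argument lands *before* the 0.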

One precondition in the above example is actually too strict. The argument `n` needn't be positive, just non-negative. There is no simple test function for “non-negative” in Clojure, but with the above rule we can write this as:

```clojure
(defn (number?) power
  [(number?) x (integer?) (>= 0) n]
  (apply * (repeat n x)))
```

Another possibility is to use the `->` macro:

```clojure
(defn (number?) power
  [(number?) x (integer?) (-> neg? not) n]
  (apply * (repeat n x)))
```

Preconditions can be combined with destructuring. Here is a variant of Clojure's function `second` that actually verifies that its argument has at least two elements:

```clojure
(defn my-second
  [[f & (seq) r]]
  (first r))
```

There is however one limitation: I couldn’t find a way to use my new syntax with map destructuring. So for now at least it works only with vector destructuring.

Comments on this syntax are welcome. Do you like it? Can you come up with something better? Or do you think that Clojure’s standard syntax is just fine?


Consider the following piece of code:

```clojure
(defmacro foo [x]
  `(map #(identity %) [~x]))
```

At first sight, nothing looks wrong with this, other than it doesn’t do anything useful. But any use of this macro causes an error message:

```clojure
(foo [:a :b])
; java.lang.Exception: Can't use qualified name as parameter: user/p1__3328
```

What's going on here? The error message hints at a problem with a function parameter. The only function being defined here is `#(identity %)`, which uses the shorthand notation for function literals expanded by the reader. Let's see what the macro call expands to:

```clojure
(macroexpand-1 '(foo [:a :b]))
```

yields

```clojure
(clojure.core/map
  (fn* [user/p1__3328] (clojure.core/identity user/p1__3328))
  [[:a :b]])
```

So here's the problem: `#(identity %)` is expanded by the reader into `(fn* [p1__3328] (identity p1__3328))`. This is a perfectly valid function literal, but the other reader feature used here, syntax-quote, doesn't know about function literals. It takes the expanded function literal as an arbitrary form and does namespace resolution on all its symbols. This leads to a namespace-qualified symbol for a function parameter, which is not legal Clojure syntax.
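The fix is simple: write the function literal out as an explicit `fn` form and let syntax-quote's auto-gensym generate the parameter name. A sketch of the repaired macro:

```clojure
;; Spelling out the fn form avoids the reader-expanded literal;
;; v# becomes a fresh, unqualified gensym instead of a qualified symbol.
(defmacro foo [x]
  `(map (fn [v#] (identity v#)) [~x]))

(foo [:a :b])   ; => ([:a :b])
```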

Moral: use reader features sparingly, ideally one at a time. Except for syntax-quote, they only save you a couple of keystrokes, a convenience that you may end up paying a high price for in terms of debugging time.


Syntax-quote has one more effect: it resolves all symbols in the current namespace (the one in which the macro is *defined*, not the one where it is used) and replaces the unqualified symbol by its namespace-qualified equivalent. For most symbols in most forms, this is the right thing to do in order to make the macro work in any namespace, as well as to avoid unwanted variable capture. More specifically, it is the right thing to do for symbols that are defined by the macro, and for symbols that will ultimately be evaluated (names referring to vars, in particular function names). It is not the right thing to do for symbols bound locally inside the form (function parameter names, symbols bound in a let form). And it is also not the right thing to do for symbols that just stand for themselves and are used in some special way by the form that the macro expands to.

The latter situation is particularly frequent in macros that generate `deftype` forms. Consider for example the following `deftype` form, which is a simplified version of the type definition used in my multiarray design study:

```clojure
(deftype multiarray [descriptor data-array]
  :as this
  Object
    (equals [o] ...)
    (hashCode [] ...)
  clojure.lang.Counted
    (count [] ...)
  clojure.lang.Indexed
    (nth [i] ...)
  clojure.lang.Sequential
  clojure.lang.Seqable
    (seq [] ...))
```

Of all the symbols shown in the above example, the only one for which namespace resolution is appropriate is `multiarray`, the name of the type being defined. All the other symbols name fields of the type, Java interfaces, or methods. They must remain unqualified. In real-life deftypes, there are of course symbols that could or should be namespace-qualified, in particular most of the symbols used inside the method definitions, which are just like function definitions. However, method definitions are often short, and rarely subject to variable capture, meaning that not namespace-resolving those symbols is rarely a problem.

In a syntax-quote template, there are two ways to deal with symbols for which the default (namespace-resolution) is not appropriate:

- Prefixing with `~'` (tilde + quote). This is a special case of an expression inside a template, whose value is the quoted symbol. A tilde-quoted symbol is taken over into the instantiated template without namespace resolution.
- Postfixing with `#` (hash sign). Such symbols are replaced with system-generated symbols that are guaranteed to be different from any other symbol in existence. This is another technique to avoid variable capture.
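A quick REPL check illustrates both escape hatches; evaluating a syntax-quote form simply returns the resulting form:

```clojure
;; ~'meta and ~'__meta stay plain symbols; this# becomes a fresh gensym:
`(~'meta [this#] ~'__meta)
;; => (meta [this__1234__auto__] __meta)   ; the gensym number will vary
```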

For generating a `deftype` form from a syntax-quote template, the only solution is thus to prefix all the symbols shown in the example above with tilde-quote. I tried: it works, but it's a mess. It's not very readable, and the inevitable mistakes lead to unpleasant error messages.

Well, this is Lisp, and in Lisp you are always free to make your own tools if you are not happy with the ones provided by the system. What I want here is a template expansion system that doesn't do namespace resolution on symbols. However, I didn't need a full-blown equivalent of syntax-quote templates either, given that I would use those `deftype` templates only for one application. So I came up with the following definitions, which for me are the right compromise between simplicity and usability:

```clojure
(defn instantiate-template [substitution-map form]
  (clojure.walk/prewalk
    (fn [x]
      (if (and (sequential? x)
               (= (first x) 'clojure.core/unquote))
        (substitution-map (second x))
        x))
    form))

(defmacro template [substitutions form]
  (let [substitution-map (into {}
                               (map (fn [[a b]] [(list 'quote a) b])
                                    (partition 2 substitutions)))]
    `(instantiate-template ~substitution-map (quote ~form))))
```

Compared to syntax-quote, this has two restrictions: it has no splicing, and it admits only symbols after a tilde, not arbitrary expressions. The `template` macro takes a let-like vector as its first argument. This vector contains the symbol-value pairs for substitution inside the template. The second argument is the template form, which presumably contains tilde-prefixed symbols for substitution. Note that the Clojure reader translates `~x` into `(clojure.core/unquote x)`, which is what the above code searches for.

Here is an example for using such templates:

```clojure
(defmacro foo [typename fieldname]
  (template [type typename
             field fieldname]
    (deftype ~type [~field])))

(foo bar boo)
(bar 42)
```

This prints `#:bar{:boo 42}`, illustrating that the macro does what it is expected to do. Of course this is not the perfect example of the utility of my little template instantiation system, as it could just as well have been written using syntax-quote!


For those who don't want to read all the explanations: my solution (still a bit experimental for the moment) is in my nstools library, which is also on Clojars.

As most Clojure programmers know, a namespace maps symbols to vars. Vars are mutable storage locations with well defined concurrency semantics, but this is not the topic of this post – see the documentation for details. But a namespace is not a simple map. To start with, a namespace stores two maps: one from symbols to their values, and one from namespace aliases to namespaces. Aliases are usually created using a `(:require ... :as ...)` clause in the `ns` form that opens a namespace. They are used in namespace-qualified symbols before the slash, as a shorthand for the full namespace name. Since aliases are used before the slash and namespace-local symbols are used after the slash (or in an unqualified name with no slash at all), there is no conflict between the two. It is thus possible to use the same symbol both as an alias and as a regular symbol in the same namespace.

The main symbol-to-value map is also not quite as simple as it seems. The values it stores are not always vars. A symbol can also have a Java class as its value. A symbol-to-class entry is created using `import` or using the `:import` clause in `ns`. A submap containing only the symbol-to-class entries of the namespace map can be obtained by calling `ns-imports`. Finally, the symbol-to-var entries can be divided up into two categories: those that refer to vars in the same namespace (created by `def` and the many macros based on it), and those that refer to vars in some other namespace. The latter are created with `use` or the `:use` clause of `ns`, and the submap of these symbols can be obtained by calling `ns-refers`. The first category, a submap of symbols to vars defined in the same namespace, is the return value of `ns-interns`.
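These distinctions are easy to explore at the REPL with the functions just mentioned (the exact contents of the maps will of course vary with your environment):

```clojure
;; In a typical namespace, map refers to clojure.core/map...
(contains? (ns-refers *ns*) 'map)    ; => true

;; ...but it is not interned here unless we define it ourselves:
(contains? (ns-interns *ns*) 'map)   ; => false

;; Aliases and imports live in their own maps:
(ns-aliases *ns*)   ; alias symbol -> namespace
(ns-imports *ns*)   ; symbol -> Java class, e.g. String -> java.lang.String
```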

There is one more subtlety, and an undocumented one as far as I know: Two symbols, `ns` and `in-ns`, are put in the namespace map when the namespace is created, and can’t be removed (using `ns-unmap`) nor redefined. This makes sense because they refer to a macro and a function needed to create new namespaces and to switch namespaces. Having them in every namespace (referring to vars in `clojure.core`) ensures that it is always possible to get out of the current namespace.

Next, let's look at how namespaces are set up in Clojure. Pretty much all the namespace management functionality is available through the standard `ns` form with its various clauses and options. The one exception is removing symbols, which can be done only by calling `ns-unmap` explicitly. The `ns` form first switches to the namespace it defines, creating it if necessary. The second step is to add references to all public vars defined in namespace `clojure.core`. This step can be modified by specifying a `:refer-clojure` clause that lists the symbols to include or exclude. Then `ns` goes through its optional clauses. A `:require` clause loads another namespace, but doesn't normally modify the namespace under construction. Only if the option `:as` is specified is there an impact on the namespace: an alias is added. A `:use` clause first does a `:require` and then adds all of the newly loaded namespace's public vars to the symbol table of the current namespace. The options `:exclude` and `:only` can be used to select a subset of the public vars. Finally, an `:import` clause adds Java classes to the namespace's symbol table.

The most dangerous, but also most convenient, `ns` clause is `:use`. In its basic form, it adds all public vars of another namespace to the symbol table of the namespace under construction. And once those symbols are there, they cannot be redefined in the namespace, except by first removing them using `ns-unmap`. The problem is that “all public vars of namespace X” is not something under your control. It's the author of the *other* namespace who decides which symbols you get in *your* namespace. The next release of namespace X may well have a few more public definitions, and if those are in conflict with your own definitions, then your module will fail to load. Therefore, as a security measure, you should use the `:only` option of `:use` with all namespaces that are out of your control, listing explicitly the definitions that you need, in order to be certain that you don't get more than you expect. Unfortunately, this includes `clojure.core`, which also grows with every new Clojure release. To be on the safe side, you should have a `:refer-clojure` clause with the `:only` option in every namespace that you intend to maintain for a longer time.
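Putting this advice into practice, a defensively written `ns` form lists exactly what it pulls in (namespace and symbol names here are illustrative, not from any real project):

```clojure
;; Every external symbol is named explicitly, so a new release of
;; clojure.core or clojure.set cannot silently shadow anything here.
(ns my.project.core
  (:refer-clojure :only [defn let fn first rest + =])
  (:use [clojure.set :only (union intersection)])
  (:require [clojure.string :as str])
  (:import (java.util Date)))
```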

So much for what I have. But what do I want? I'd like to be able to set up a namespace to my taste and then be able to use it as a basis for deriving other namespaces. With that possibility, I would define a master namespace once per project, being careful to always use the `:only` option in `:refer-clojure` and `:use`. All other namespaces in my project would then be based on this master namespace and only add or remove symbols for their specific local needs.

To implement this functionality, I added three new clauses to `ns`. The `:like` clause takes a namespace as its only argument and adds all symbols from that namespace that refer to vars in yet another namespace to the current namespace (make sure you read this properly; there are at least three namespaces involved here!). The `:clone` clause does the same but also adds the symbols defined in the other namespace. In other words, `:clone` is equivalent to `:like` followed by `:use`. The third new clause is `:remove`, whose arguments are symbols to be removed from the namespace. It is explicitly allowed to “remove” symbols that aren’t there. This creates another way to protect one’s namespace against future extensions in namespaces that are `:use`d: simply add all symbols defined in your namespace to the `:remove` list.

The above paragraph contains a small lie: I didn’t add anything to `ns`, of course, though that’s what I would have liked to do. I made a copy of `ns` and added the new clauses to the copy. The copy is in namespace `nstools.ns` and it’s called `ns+` – as explained above, I cannot call it `ns`. So to use nstools, you have to replace `ns` by `ns+` and put a `(use 'nstools.ns)` before it.

As I said, this library is still a bit experimental. I am not sure for example if both `:like` and `:clone` are necessary. And perhaps `:remove` should be called `:exclude`. Of course, any feedback is welcome!


The example I will use is stream-based I/O, based on the java.io library. Writing to such a stream has side-effects, so it is certainly not purely functional. But even reading from a stream is not purely functional: every time you call the read method, you get another return value. A stream object therefore represents mutable state. Passing around such an object among functions in a program makes it difficult to verify that the stream is read or written to properly.

Here is the basic idea of protecting state in the state monad. Suppose that the only way to create your mutable object is through a function that takes a state monad value as its argument. The function creates the object, calls the state monad value on it, and then destroys the object. Client code never obtains a reference to the object, so the only way to act on it is through state monad values that define useful operations on the object (such as “read a line from the stream”). To add another layer of protection, the object is wrapped in a protective data structure (such as a closure) before being passed to the state monad value. Only a well-defined set of state monad values gets the key to access the object, meaning that only those clearly identified operations can act on the object. You can then use those operations, and combine them in the state monad to define more complex operations. But no matter how you try, you will never get a reference to the object that you could assign to a var, pass to some function, or (ab-)use in any other way.
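A minimal sketch of this pattern (all names here are made up, not from the library, and a Clojure atom stands in for the mutable object; in the real library the unlocking key is hidden inside a `let` rather than being a visible keyword):

```clojure
;; "Locking" wraps the object in a closure that yields it only for the
;; right key; client code never sees the object itself.
(defn lock [obj] (fn [key] (when (= key ::secret) obj)))
(defn unlock [locked] (locked ::secret))

(defn with-counter [statement]
  (let [counter (atom 0)]                 ; the mutable object
    (first (statement (lock counter)))))  ; run the statement, keep its value

(defn increment []                        ; a state monad value: s -> [value s]
  (fn [s] [(swap! (unlock s) inc) s]))

(with-counter (increment)) ; -> 1
```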

For stream-based I/O, this approach is implemented in the library clojure.contrib.monadic-io-streams. Before trying any of the examples below, you have to evaluate the following form that takes care of importing the libraries:

(ns monadic-io-demo
  (:refer-clojure :exclude (read-line print println flush))
  (:use [clojure.contrib.monadic-io-streams])
  (:use [clojure.contrib.monads]))

The `:refer-clojure` clause is necessary because clojure.contrib.monadic-io-streams defines a couple of names that are also defined in clojure.core. In general this is not a good idea, but here the names are the same as those of the Java methods that are being called, which is a useful feature as well. The number of good names for functions is unfortunately not unlimited!

With the bookkeeping done, let’s look at a basic application:

(with-reader "my-file.txt" (read-line))

This returns the first line of the text file “my-file.txt”. To understand how this works, here is the definition of read-line:

(defn read-line [] (fn [s] [(.readLine (unlock s)) s]))

The call `(read-line)` thus returns a state monad value: a function that takes a state argument `s`, calls the Java method `readLine` on the unlocked state, and returns a vector containing the freshly read line and the state argument. The function `unlock` is defined locally in a `let` form and is thus inaccessible from the outside. It retrieves the real state value from the wrapper that only serves to protect it.

Next, we need to look at `with-reader`:

(defn with-reader [reader-spec statement]
  (with-open [r (reader reader-spec)]
    (first (statement (lock r)))))

The function `reader` that it calls comes from clojure.contrib.duck-streams. It creates the stream reader object, which is then locked (wrapped inside a closure) and passed to `statement`, which happens to be the state monad value returned by `(read-line)`. The `with-open` macro ensures that the reader is closed. The return value of the `with-reader` function is the first item in the return value of the monadic statement; the second item is the state, which is of no interest any more.

There are two levels of protection here: first, the reader object is never made accessible to the outside world. It is created, injected into the monadic statement, and then made invalid by closing it. The only way to get a reference would be to write a monadic statement that exposes the state. This is indeed possible, and the statement is even provided under the name `fetch-state` in clojure.contrib.monads. The following piece of code returns the state value:

(with-reader "my-file.txt" (fetch-state))

But here the second level of protection takes over: the return value is the *locked* version of the state, which happens to be a closure. That closure must be called with the right key in order to unlock the state, but the key is not accessible anywhere. Only a handful of functions in clojure.contrib.monadic-io-streams can unlock the state and work on it.

The typical way to do more complex I/O using this monadic approach is to define complex I/O statements by composing the basic I/O statements (`read`, `read-line`, `write`, `print`, etc.) in the state monad. This permits the construction of arbitrary I/O code, all in a purely functional way. In the end, the compound I/O statement is applied to an I/O stream using `with-reader` or `with-writer`. That part is necessarily not purely functional: when reading a file, nothing guarantees that the file will be the same every time it is read. But the non-functional part is now localized in a single place, and the complex aspect of I/O, the composition of I/O statements to do the required work, is purely functional.
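A sketch of such a composition, assuming the setup form above has been evaluated (`state-m` comes from clojure.contrib.monads; `read-line` is the library's monadic statement, not clojure.core's):

```clojure
;; Compose two basic statements into a compound statement that reads two
;; lines. The composition itself is purely functional: no I/O happens yet.
(def read-two-lines
  (domonad state-m
    [l1 (read-line)
     l2 (read-line)]
    [l1 l2]))

;; Only applying the compound statement to a stream performs actual I/O:
(with-reader "my-file.txt" read-two-lines)
```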

As I said earlier, the same approach can be applied to working with mutable arrays. There would be a function `make-array` that takes a monadic array-modifying statement as its argument. This function would create an array, run it through the statement, and return the resulting modified array. The only array-modifying functions would be defined as state monad values. The net result would be a referentially transparent way to create a new array and have it initialized by an arbitrary algorithm. However, once returned from `make-array`, the array would be immutable.


Scoping refers to where a function looks for the definitions of symbols that it doesn’t have locally. Given `(fn [x] (+ b x))`, where does `b` come from? With *lexical scoping*, `b` is taken from the lexical environment, i.e. the forms surrounding the function definition, or from the global namespace if the lexical environment doesn’t have a `b`. The lookup typically happens when the function is compiled. With *dynamic scoping*, the lookup happens at runtime by following the call stack: first the calling function is checked for a definition of `b`, then the caller’s caller, and so on.

By now the issue has been settled: lexical scoping is the default in all modern programming languages, including all modern Lisp dialects. And that includes Clojure, of course. Lexical scoping is more predictable and permits compile-time analysis of which value a symbol refers to. It also permits closures, which have become a popular technique in functional programming.

However, dynamic scoping is of use in some occasions and it can in fact be simulated in Clojure. In this post I will show how and what to watch out for.

First of all, why would one want dynamic scoping? Here is one example. Recently I wrote an implementation of macrolet and symbol-macrolet for Clojure. These are both macros that modify the macro expansion procedure by adding local macro definitions. This means that a stack of macro definitions must be maintained: each macrolet adds definitions to the stack, and at the end of the macrolet form the new definitions are popped off again.

One usual way to handle this would be to pass the stack around among the functions that do the macro expansion, which could modify it as needed. Another approach would be hiding this passed-around state using the state monad (see part 3 of my monad tutorial). But in the specific situation of macro expansion, Clojure’s `macroexpand-1` enters into the call chain. I couldn’t modify this function to pass on the macro definition stack, so I had to work around it in some way. I could have avoided using `macroexpand-1`, for example. But I chose to try simulating dynamic scoping on this occasion.

With dynamic scoping, a function that wants to modify the stack just redefines the variable containing it, calls the expansion recursively, and sets the stack back to its old value. A function accessing the stack just uses the current value of the variable, which is looked up in the call chain.
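A minimal sketch of this idea using `binding` (names are made up; the `^:dynamic` metadata is required by later Clojure versions, while the Clojure of the time allowed rebinding any var):

```clojure
;; *stack* is looked up at call time, so current-top sees whatever value
;; its caller established via binding.
(def ^:dynamic *stack* ())

(defn current-top []
  (first *stack*))

(defn with-pushed [x thunk]
  (binding [*stack* (cons x *stack*)]  ; rebound for this thread only
    (thunk)))                          ; old value restored on exit

(with-pushed :a current-top) ; -> :a
```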

How can this be simulated in Clojure? The obvious candidate is `binding`. The `binding` form redefines a var in a namespace for the duration of the execution of its body. The redefinition is valid only inside the thread that is being executed, so other threads are not affected. In my macro expansion code, one of the stack vars is defined by

(defvar- macro-fns {})

and modified in each call to `macrolet`:

(defmacro macrolet [fn-bindings & exprs]
  (let [names     (map first fn-bindings)
        name-map  (into {} (map (fn [n] [(list 'quote n) n]) names))
        macro-map (eval `(letfn ~fn-bindings ~name-map))]
    (binding [macro-fns     (merge macro-fns macro-map)
              macro-symbols (apply dissoc macro-symbols names)]
      `(do ~@(map expand-all exprs)))))

Well, that’s *almost* all there is to say about it. Except that the definition of `macrolet` given above does not work.

The problem is laziness. The `binding` form changes the definition of `macro-fns` only for the duration of the execution of its body and then resets it to its previous value. But the execution of the body is just a call to `map`. This doesn’t actually do anything; it merely returns a package that calls `expand-all` as soon as the first element of the sequence is requested. And that happens only after the `binding` form has been left.

Once the problem is identified, the solution is simple: add a `doall` around the `map`:

(defmacro macrolet [fn-bindings & exprs]
  (let [names     (map first fn-bindings)
        name-map  (into {} (map (fn [n] [(list 'quote n) n]) names))
        macro-map (eval `(letfn ~fn-bindings ~name-map))]
    (binding [macro-fns     (merge macro-fns macro-map)
              macro-symbols (apply dissoc macro-symbols names)]
      `(do ~@(doall (map expand-all exprs))))))

However, adding all the necessary `doall`s requires careful attention. Forgetting one does cause errors, but not necessarily immediately. For this reason, simulating dynamic scoping should be avoided unless there is a good reason for it. In retrospect, I am not even sure if my reason was good enough, and perhaps one day I will rewrite the code in a different way.


Basically, a monad transformer is a function that takes a monad argument and returns another monad. The returned monad is a variant of the one passed in to which some functionality has been added. The monad transformer defines that added functionality. Many of the common monads that I have presented before have monad transformer analogs that add the monad’s functionality to another monad. This makes monads modular by permitting client code to assemble monad building blocks into a customized monad that is just right for the task at hand.

Consider two monads that I have discussed before: the maybe monad and the sequence monad. The maybe monad is for computations that can fail to produce a valid value, and return nil in that case. The sequence monad is for computations that return multiple results, in the form of monadic values that are sequences. A monad combining the two can take two forms: 1) computations yielding multiple results, any of which could be `nil` to indicate failure, or 2) computations yielding either a sequence of results or `nil` in the case of failure. The more interesting combination is 1), because 2) is of little practical use: failure can be represented more easily, and with no additional effort, by returning an empty result sequence.

So how can we create a monad that puts the maybe monad functionality inside sequence monad values? Is there a way we can reuse the existing implementations of the maybe monad and the sequence monad? It turns out that this is not possible, but we can keep one and rewrite the other one as a monad transformer, which we can then apply to the sequence monad (or in fact some other monad) to get the desired result. To get the combination we want, we need to turn the maybe monad into a transformer and apply it to the sequence monad.

First, as a reminder, the definitions of the maybe and the sequence monads:

(defmonad maybe-m
  [m-zero   nil
   m-result (fn [v] v)
   m-bind   (fn [mv f] (if (nil? mv) nil (f mv)))
   m-plus   (fn [& mvs] (first (drop-while nil? mvs)))])

(defmonad sequence-m
  [m-result (fn [v] (list v))
   m-bind   (fn [mv f] (apply concat (map f mv)))
   m-zero   (list)
   m-plus   (fn [& mvs] (apply concat mvs))])

And now the definition of the maybe monad transformer:

(defn maybe-t [m]
  (monad
    [m-result (with-monad m m-result)
     m-bind   (with-monad m
                (fn [mv f]
                  (m-bind mv (fn [x]
                               (if (nil? x)
                                 (m-result nil)
                                 (f x))))))
     m-zero   (with-monad m m-zero)
     m-plus   (with-monad m m-plus)]))

The real definition in clojure.algo.monads is a bit more complicated, and I will explain the differences later, but for now this basic version is good enough. The combined monad is constructed by

(def maybe-in-sequence-m (maybe-t sequence-m))

which is a straightforward function call, the result of which is a monad. Let’s first look at what `m-result` does. The `m-result` of `maybe-m` is the identity function, so we’d expect that the `m-result` of our combined monad is just the one from `sequence-m`. This is indeed the case, as `(with-monad m m-result)` returns the `m-result` function from monad `m`. We see the same construct for `m-zero` and `m-plus`, meaning that all we need to understand is `m-bind`.

The combined `m-bind` calls the `m-bind` of the base monad (`sequence-m` in our case), but it modifies the function argument, i.e. the function that represents the rest of the computation. Before calling it, it first checks if its argument is `nil`. If it isn’t, the original function is called, meaning that the combined monad behaves just like the base monad as long as no computation ever returns `nil`. If there is a `nil` value, the maybe monad says that no further computation should take place and that the final result should immediately be `nil`. However, we can’t just return `nil`, as we must return a valid monadic value in the combined monad (in our example, a sequence of possibly-`nil` values). So we feed `nil` into the base monad’s `m-result`, which takes care of wrapping up `nil` in the required data structure.

Let’s see it in action:

(domonad maybe-in-sequence-m [x [1 2 nil 4] y [10 nil 30 40]] (+ x y))

The output is:

(11 nil 31 41 12 nil 32 42 nil 14 nil 34 44)

As expected, there are all the combinations of non-`nil` values from both input sequences. However, it is surprising at first sight that there are four `nil` entries. Shouldn’t there be eight, resulting from the combinations of a `nil` in one sequence with the four values in the other sequence?

To understand why there are four `nil`s, let’s look again at how the `m-bind` definition in `maybe-t` handles them. At the top level, it will be called with the vector `[1 2 nil 4]` as the monadic value. It hands this to the `m-bind` of `sequence-m`, which calls the anonymous function in `maybe-t`’s `m-bind` four times, once for each element of the vector. For the three non-`nil` values, no special treatment is added. For the one `nil` value, the net result of the computation is `nil` and the rest of the computation is never called. The `nil` in the first input vector thus accounts for one `nil` in the result, and the rest of the computation is called three times. Each of these three rounds then produces three valid results and one `nil`. We thus have 3×3 valid results, 3×1 `nil`s from the second vector, plus the one `nil` from the first vector. That makes nine valid results and four `nil`s.

Is there a way to get all sixteen combinations, with all the possible `nil` results in the output? Yes, but not using the `maybe-t` transformer. You have to use the maybe and the sequence monads separately, for example like this:

(with-monad maybe-m
  (def maybe-+ (m-lift 2 +)))

(domonad sequence-m
  [x [1 2 nil 4]
   y [10 nil 30 40]]
  (maybe-+ x y))

When you use `maybe-t`, you always get the shortcutting behaviour seen above: as soon as there is a `nil`, the total result is `nil` and the rest of the computation is never executed. In most situations, that’s what you want.

The combination of `maybe-t` and `sequence-m` is not so useful in practice because a much easier (and more efficient) way to handle invalid results is to remove them from the sequences before any further processing happens. But the example is simple and thus fine for explaining the basics. You are now ready for a more realistic example: the use of `maybe-t` with the probability distribution monad.

The probability distribution monad is made for working with finite probability distributions, i.e. probability distributions in which a finite set of values has a non-zero probability. Such a distribution is represented by a map from the values to their probabilities. The monad and various useful functions for working with finite distributions are defined in the library clojure.contrib.probabilities.finite-distributions (*NOTE: this module has not yet been migrated to the new Clojure contrib library set.*).

A simple example of a finite distribution:

(use 'clojure.contrib.probabilities.finite-distributions)
(def die (uniform #{1 2 3 4 5 6}))
(prob odd? die)

This prints `1/2`, the probability that throwing a single die yields an odd number. The value of `die` is the probability distribution of the outcome of throwing a die:

{6 1/6, 5 1/6, 4 1/6, 3 1/6, 2 1/6, 1 1/6}

Suppose we throw the die twice and look at the sum of the two values. What is its probability distribution? That’s where the monad comes in:

(domonad dist-m [d1 die d2 die] (+ d1 d2))

The result is:

{2 1/36, 3 1/18, 4 1/12, 5 1/9, 6 5/36, 7 1/6, 8 5/36, 9 1/9, 10 1/12, 11 1/18, 12 1/36}

You can read the above domonad block as ‘draw a value from the distribution `die` and call it `d1`, draw a value from the distribution `die` and call it `d2`, then give me the distribution of `(+ d1 d2)`’. This is a very simple example; in general, each distribution can depend on the values drawn from the preceding ones, thus creating the joint distribution of several variables. This approach is known as ‘ancestral sampling’.
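For instance, a later draw can depend on an earlier one (a small sketch using the same library):

```clojure
;; d2 is drawn uniformly from 1..d1, so its distribution depends on the
;; value drawn for d1; the result is the joint distribution of [d1 d2].
(domonad dist-m
  [d1 (uniform #{1 2 3})
   d2 (uniform (range 1 (inc d1)))]
  [d1 d2])
;; e.g. [1 1] has probability 1/3 * 1 = 1/3,
;;      [3 2] has probability 1/3 * 1/3 = 1/9
```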

The monad `dist-m` applies the basic rule of combining probabilities: if event A has probability p and event B has probability q, and if the events are independent (or at least uncorrelated), then the probability of the combined event (A and B) is p*q. Here is the definition of `dist-m`:

(defmonad dist-m
  [m-result (fn [v] {v 1})
   m-bind   (fn [mv f]
              (letfn [(add-prob [dist [x p]]
                        (assoc dist x (+ (get dist x 0) p)))]
                (reduce add-prob {}
                        (for [[x p] mv
                              [y q] (f x)]
                          [y (* q p)]))))])

As usual, the interesting stuff happens in `m-bind`. Its first argument, `mv`, is a map representing a probability distribution. Its second argument, `f`, is a function representing the rest of the calculation. It is called for each possible value in the probability distribution in the `for` form. This `for` form iterates over both the possible values of the input distribution and the possible values of the distribution returned by `(f x)`, combining the probabilities by multiplication and putting them into the output distribution. This is done by reducing over the helper function `add-prob`, which checks if the value is already present in the map, and if so, adds the probability to the previously obtained one. This is necessary because the samples from the `(f x)` distributions can contain the same value more than once if they were obtained for different `x`.

For a more interesting example, let’s consider the famous Monty Hall problem. In a game show, the player faces three doors. A prize is waiting for him behind one of them, but there is nothing behind the two other ones. If he picks the right door, he gets the prize. Up to there, the problem is simple: the probability of winning is 1/3.

But there is a twist. After the player makes his choice, the game host opens one of the two other doors, revealing an empty space. He then asks the player if he wants to change his mind and choose the last remaining door instead of his initial choice. Is this a good strategy?

To make this a well-defined problem, we have to assume that the game host knows where the prize is and that he would not open the corresponding door. Then we can start coding:

(def doors #{:A :B :C})

(domonad dist-m
  [prize  (uniform doors)
   choice (uniform doors)]
  (if (= choice prize) :win :loose))

Let’s go through this step by step. First, we choose the prize door by drawing from a uniform distribution over the three doors `:A`, `:B`, and `:C`. That represents what happens before the player comes in. Then the player’s initial choice is made, drawing from the same distribution. Finally, we ask for the distribution of the outcome of the game, `:win` or `:loose`. The answer is, unsurprisingly, `{:win 1/3, :loose 2/3}`.

This covers the case in which the player does not accept the host's proposition to change his mind. If he does, the game becomes more complicated:

(domonad dist-m
  [prize  (uniform doors)
   choice (uniform doors)
   opened (uniform (disj doors prize choice))
   choice (uniform (disj doors opened choice))]
  (if (= choice prize) :win :loose))

The third step is the most interesting one: the game host opens a door which is neither the prize door nor the initial choice of the player. We model this by removing both prize and choice from the set of doors, and draw uniformly from the resulting set, which can have one or two elements depending on prize and choice. The player then changes his mind and chooses from the set of doors other than the open one and his initial choice. With the standard three-door game, that set has exactly one element, but the code above also works for a larger number of doors - try it out yourself!

Evaluating this piece of code yields `{:loose 1/3, :win 2/3}`, indicating that the change-your-mind strategy is indeed the better one.

Back to the `maybe-t` transformer. The finite-distribution library defines a second monad by

(def cond-dist-m (maybe-t dist-m))

This makes `nil` a special value in distributions, used to represent events that we don't want to consider as possible ones. With the definitions of `maybe-t` and `dist-m`, you can guess how `nil` values are propagated when distributions are combined: for any `nil` value, the distributions that potentially depend on it are never evaluated, and the `nil` value's probability is transferred entirely to the probability of `nil` in the output distribution. But how does `nil` ever get into a distribution? And, most of all, what is that good for?

Let's start with the last question. The goal of these `nil`-containing distributions is to eliminate certain values. Once the final distribution is obtained, the `nil` value is removed, and the remaining distribution is normalized to make the sum of the probabilities of the remaining values equal to one. This `nil`-removal and normalization is performed by the utility function `normalize-cond`. The `cond-dist-m` monad is thus a sophisticated way to compute conditional probabilities, and in particular to facilitate Bayesian inference, which is an important technique in all kinds of data analysis.

As a first exercise, let's calculate a simple conditional probability from an input distribution and a predicate. The output distribution should contain only the values satisfying the predicate, but be normalized to one:

(defn cond-prob [pred dist]
  (normalize-cond
    (domonad cond-dist-m
      [v dist
       :when (pred v)]
      v)))

The important line is the one with the `:when` condition. As I have explained in parts 1 and 2, the `domonad` form becomes

(m-bind dist (fn [v] (if (pred v) (m-result v) m-zero)))

If you have been following carefully, you should complain now: with the definitions of `dist-m` and `maybe-t` I have given above, `cond-dist-m` should not have any `m-zero`! But as I said earlier, the `maybe-t` shown here is a simplified version. The real one checks if the base monad has an `m-zero`, and if it hasn't, it substitutes its own, which is `(with-monad m (m-result nil))`. Therefore the `m-zero` of `cond-dist-m` is `{nil 1}`, the distribution whose only value is `nil`.

The net effect of the `domonad` form in this example is thus to keep all values that satisfy the predicate with their initial probabilities, but to transfer the probability of all other values to `nil`. The call to `normalize-cond` then takes out the `nil` and re-distributes its probability to the other values. Example:

(cond-prob odd? die) -> {5 1/3, 3 1/3, 1 1/3}

The `cond-dist-m` monad really becomes interesting for Bayesian inference problems. Bayesian inference is a technique for drawing conclusions from incomplete observations. It has a wide range of applications, from spam filters to weather forecasts. For an introduction to the technique and its mathematical basis, you can start with the Wikipedia article.

Here I will discuss a very simple inference problem and its solution in Clojure. Suppose someone has three dice, one with six faces, one with eight, and one with twelve. This person picks one die, throws it a few times, and gives us the numbers, but doesn't tell us which die it was. Given these observations, we would like to infer the probabilities for each of the three dice to have been picked. We start by defining a function that returns the distribution of a die with n faces:

(defn die-n [n] (uniform (range 1 (inc n))))

Next, we come to the core of Bayesian inference. One central ingredient is the probability for throwing a given number under the assumption that die X was used. We thus need the probability distributions for each of our three dice:

(def dice {:six (die-n 6) :eight (die-n 8) :twelve (die-n 12)})

The other central ingredient is a distribution representing our 'prior knowledge' about the chosen die. We actually know nothing at all, so each die has the same weight in this distribution:

(def prior (uniform (keys dice)))

Now we can write the inference function. It takes as input the prior-knowledge distribution and a number that was obtained from the die. It returns the *a posteriori* distribution that combines the prior information with the information from the observation.

(defn add-observation [prior observation]
  (normalize-cond
    (domonad cond-dist-m
      [die    prior
       number (get dice die)
       :when  (= number observation)]
      die)))

Let's look at the `domonad` form. The first step picks one die according to the prior knowledge. The second line "throws" that die, obtaining a number. The third line eliminates the numbers that don't match the observation. And then we ask for the distribution of the die.

It is instructive to compare this function with the mathematical formula for Bayes' theorem, which is the basis of Bayesian inference. Bayes' theorem is P(H|E) = P(E|H) P(H) / P(E), where H stands for the hypothesis ("the die chosen was X") and E stands for the evidence ("the number thrown was N"). P(H) is the prior knowledge. The formula must be evaluated for a fixed value of E, which is the observation.

The first line of our `domonad` form implements P(H), the second line implements P(E|H). These two lines together thus sample P(E, H) using ancestral sampling, as we have seen before. The `:when` line represents the observation; we wish to apply Bayes' theorem for a fixed value of E. Once E has been fixed, P(E) is just a number, required for normalization. This is handled by `normalize-cond` in our code.

Let's see what happens when we add a single observation:

(add-observation prior 1) -> {:twelve 2/9, :eight 1/3, :six 4/9}

We see that the highest probability is given to `:six`, then `:eight`, and finally `:twelve`. This happens because 1 is a possible value for all dice, but it is more probable as a result of throwing a six-faced die (1/6) than as a result of throwing an eight-faced die (1/8) or a twelve-faced die (1/12). The observation thus favours a die with a small number of faces.

If we have three observations, we can call add-observation repeatedly:

(-> prior (add-observation 1) (add-observation 3) (add-observation 7)) -> {:twelve 8/35, :eight 27/35}

Now we see that the candidate `:six` has disappeared. In fact, the observed value of 7 rules it out completely. Moreover, the observed numbers strongly favour `:eight` over `:twelve`, which is again due to the preference for the smallest possible die in the game.

This inference problem is very similar to how a spam filter works. In that case, the three dice are replaced by the choices `:spam` or `:no-spam`. For each of them, we have a distribution of words, obtained by analyzing large quantities of e-mail messages. The function `add-observation` is strictly the same; we'd just pick different variable names. And then we'd call it for each word in the message we wish to evaluate, starting from a prior distribution defined by the total number of `:spam` and `:no-spam` messages in our database.

To end this introduction to monad transformers, I will explain the `m-zero` problem in `maybe-t`. As you know, the maybe monad has an `m-zero` definition (`nil`) and an `m-plus` definition, and those two can be carried over into a monad created by applying `maybe-t` to some base monad. This is what we have seen in the case of `cond-dist-m`. However, the base monad might have its own `m-zero` and `m-plus`, as we have seen in the case of `sequence-m`. Which set of definitions should the combined monad have? Only the user of `maybe-t` can make that decision, so `maybe-t` has an optional parameter for this (see its documentation for the details). The only clear case is a base monad without `m-zero` and `m-plus`; in that case, nothing is lost if `maybe-t` imposes its own.


- A data structure that represents the result of a computation, or the computation itself. We haven’t seen an example of the latter case yet, but it will come soon.
- A function `m-result` that converts an arbitrary value to a monadic data structure equivalent to that value.
- A function `m-bind` that binds the result of a computation, represented by the monadic data structure, to a name (using a function of one argument) to make it available in the following computational step.

Taking the sequence monad as an example, the data structure is the sequence, representing the outcome of a non-deterministic computation, `m-result` is the function `list`, which converts any value into a list containing just that value, and `m-bind` is a function that executes the remaining steps once for each element in a sequence, and removes one level of nesting in the result.
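The two operations of the sequence monad, written out directly (a sketch matching the definitions just described):

```clojure
;; m-result wraps a value in a one-element list:
(list 2)                                   ; -> (2)

;; m-bind maps the rest of the computation over the sequence and
;; flattens one level of nesting:
(apply concat (map (fn [x] (list x (* 10 x)))
                   '(1 2 3)))              ; -> (1 10 2 20 3 30)
```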

The three ingredients above are what defines a monad, under the condition that the three monad laws are respected. Some monads have two additional definitions that make it possible to perform additional operations. These two definitions have the names `m-zero` and `m-plus`. `m-zero` represents a special monadic value that corresponds to a computation with no result. One example is `nil` in the maybe monad, which typically represents a failure of some kind. Another example is the empty sequence in the sequence monad. The identity monad is an example of a monad that has no `m-zero`.

`m-plus` is a function that combines the results of two or more computations into a single one. For the sequence monad, it is the concatenation of several sequences. For the maybe monad, it is a function that returns the first of its arguments that is not `nil`.

There is a condition that has to be satisfied by the definitions of `m-zero` and `m-plus` for any monad:

```clojure
(= (m-plus m-zero monadic-expression)
   (m-plus monadic-expression m-zero)
   monadic-expression)
```

In words, combining `m-zero` with any monadic expression must yield the same expression. You can easily verify that this is true for the two examples (maybe and sequence) given above.
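Here is a quick standalone check of that law for the sequence monad, where `m-zero` is the empty sequence and `m-plus` is concatenation:

```clojure
;; verifying the m-zero/m-plus law for the sequence monad
(let [m-zero '()
      m-plus concat
      mexpr  '(1 2 3)]
  (= (m-plus m-zero mexpr)
     (m-plus mexpr m-zero)
     mexpr))
;; => true
```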

One benefit of having an `m-zero` in a monad is the possibility to use conditions. In the first part, I promised to return to the `:when` clauses in Clojure’s `for` forms, and now the time has come to discuss them. A simple example is

```clojure
(for [a (range 5) :when (odd? a)]
  (* 2 a))
```

The same construction is possible with `domonad`:

```clojure
(domonad sequence-m
  [a (range 5) :when (odd? a)]
  (* 2 a))
```
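Assuming `clojure.algo.monads` is on the classpath, you can check that the two forms agree:

```clojure
(require '[clojure.algo.monads :refer [domonad sequence-m]])

;; for and domonad with sequence-m produce the same sequence
(= (for [a (range 5) :when (odd? a)] (* 2 a))
   (domonad sequence-m
     [a (range 5) :when (odd? a)]
     (* 2 a)))
;; => true
```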

Recall that `domonad` is a macro that translates a `let`-like syntax into a chain of calls to `m-bind` ending in a call to `m-result`. The clause `a (range 5)` becomes

```clojure
(m-bind (range 5) (fn [a] remaining-steps))
```

where `remaining-steps` is the transformation of the rest of the `domonad` form. A `:when` clause is of course treated specially: it becomes

```clojure
(if predicate remaining-steps m-zero)
```

Our small example thus expands to

```clojure
(m-bind (range 5)
        (fn [a]
          (if (odd? a)
            (m-result (* 2 a))
            m-zero)))
```

Inserting the definitions of `m-bind`, `m-result`, and `m-zero`, we finally get

```clojure
(apply concat (map (fn [a]
                     (if (odd? a)
                       (list (* 2 a))
                       (list)))
                   (range 5)))
```

The result of `map` is a sequence of lists that have zero or one elements: zero for even values (the value of `m-zero`) and one for odd values (produced by `m-result`). `concat` makes a single flat list out of this, which contains only the elements that satisfy the `:when` clause.

As for `m-plus`, it is in practice used mostly with the maybe and sequence monads, or with variations of them. A typical use would be a search algorithm (think of a parser, a regular expression search, a database query) that can succeed (with one or more results) or fail (no results). `m-plus` would then be used to pursue alternative searches and combine the results into one (sequence monad), or to continue searching until a result is found (maybe monad). Note that it is perfectly possible in principle to have a monad with an `m-zero` but no `m-plus`, though in all common cases an `m-plus` can be defined as well if an `m-zero` is known.
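A small illustration of the "continue until a result is found" behaviour, assuming `clojure.algo.monads`: in the maybe monad, `m-plus` returns the first non-`nil` argument.

```clojure
(require '[clojure.algo.monads :refer [with-monad maybe-m]])

;; try alternatives in order until one succeeds
(with-monad maybe-m
  (m-plus nil nil 3 4))
;; => 3
```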

After this bit of theory, let’s get acquainted with more monads. In the beginning of this part, I mentioned that the data structure used in a monad does not always represent the result(s) of a computational step, but sometimes the computation itself. An example of such a monad is the state monad, whose data structure is a function.

The state monad’s purpose is to facilitate the implementation of stateful algorithms in a purely functional way. Stateful algorithms are algorithms that require updating some variables. They are of course very common in imperative languages, but they are not compatible with the basic principle of pure functional programming, which rules out mutable data structures. One way to simulate state changes while remaining purely functional is to have a special data item (in Clojure that would typically be a map) that stores the current values of all mutable variables that the algorithm refers to. A function that in an imperative program would modify a variable now takes the current state as an additional input argument and returns an updated state along with its usual result. The changing state thus becomes explicit in the form of a data item that is passed from function to function as the algorithm’s execution progresses. The state monad is a way to hide the state-passing behind the scenes and write an algorithm in an imperative style that consults and modifies the state.

The state monad differs from the monads that we have seen before in that its data structure is a function. This is thus a case of a monad whose data structure represents not the result of a computation, but the computation itself. A state monad value is a function that takes a single argument, the current state of the computation, and returns a vector of length two containing the result of the computation and the updated state after the computation. In practice, these functions are typically closures, and what you use in your program code are functions that create these closures. Such state-monad-value-generating functions are the equivalent of statements in imperative languages. As you will see, the state monad allows you to compose such functions in a way that makes your code look perfectly imperative, even though it is still purely functional!

Let’s start with a simple but frequent situation: the state that your code deals with takes the form of a map. You may consider that map to be a namespace in an imperative language, with each key defining a variable. Two basic operations are reading the value of a variable, and modifying that value. They are already provided in the Clojure monad library, but I will show them here anyway because they make nice examples.

First, we look at `fetch-val`, which retrieves the value of a variable:

```clojure
(defn fetch-val [key]
  (fn [s]
    [(key s) s]))
```

Here we have a simple state-monad-value-generating function. It returns a function of a state variable `s` which, when executed, returns a vector of the return value and the new state. The return value is the value corresponding to the key in the map that is the state value. The new state is just the old one – a lookup should not change the state, of course.

Next, let’s look at `set-val`, which modifies the value of a variable and returns the previous value:

```clojure
(defn set-val [key val]
  (fn [s]
    (let [old-val (get s key)
          new-s   (assoc s key val)]
      [old-val new-s])))
```

The pattern is the same again: `set-val` returns a function of state `s` that, when executed, returns the old value of the variable plus an updated state map in which the new value is the given one.
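The two generators from above can be exercised directly on a concrete state map (definitions repeated so the snippet is self-contained):

```clojure
(defn fetch-val [key]
  (fn [s]
    [(key s) s]))

(defn set-val [key val]
  (fn [s]
    (let [old-val (get s key)
          new-s   (assoc s key val)]
      [old-val new-s])))

((fetch-val :a) {:a 1 :b 2})   ;; a lookup leaves the state untouched
;; => [1 {:a 1 :b 2}]

((set-val :a 10) {:a 1 :b 2})  ;; returns the old value and the new state
;; => [1 {:a 10 :b 2}]
```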

With these two ingredients, we can start composing statements. Let’s define a statement that copies the value of one variable into another one and returns the previous value of the modified variable:

```clojure
(defn copy-val [from to]
  (domonad state-m
    [from-val   (fetch-val from)
     old-to-val (set-val to from-val)]
    old-to-val))
```

What is the result of `copy-val`? A state-monad value, of course: a function of a state variable `s` that, when executed, returns the old value of the variable `to` plus the state in which the copy has taken place. Let’s try it out:

```clojure
(let [initial-state        {:a 1 :b 2}
      computation          (copy-val :b :a)
      [result final-state] (computation initial-state)]
  final-state)
```
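The `result` binding is also meaningful: it is the old value of `:a`. Assuming `clojure.algo.monads` (which provides `state-m` as well as `fetch-val` and `set-val` with the semantics shown above), the whole pair can be inspected:

```clojure
(require '[clojure.algo.monads :refer [domonad state-m fetch-val set-val]])

(defn copy-val [from to]
  (domonad state-m
    [from-val   (fetch-val from)
     old-to-val (set-val to from-val)]
    old-to-val))

;; returns [old value of :a, updated state]
((copy-val :b :a) {:a 1 :b 2})
;; => [1 {:a 2, :b 2}]
```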

We get `{:a 2, :b 2}`, as expected. But how does it work? To understand the state monad, we need to look at its definitions for `m-result` and `m-bind`, of course.

First, `m-result`, which does not contain any surprises: it returns a function of a state variable `s` that, when executed, returns the result value `v` and the unchanged state `s`:

```clojure
(defn m-result [v]
  (fn [s] [v s]))
```

The definition of `m-bind` is more interesting:

```clojure
(defn m-bind [mv f]
  (fn [s]
    (let [[v ss] (mv s)]
      ((f v) ss))))
```

Obviously, it returns a function of a state variable `s`. When that function is executed, it first runs the computation described by `mv` (the first ‘statement’ in the chain set up by `m-bind`) by applying it to the state `s`. The return value is decomposed into result `v` and new state `ss`. The result of the first step, `v`, is injected into the rest of the computation by calling `f` on it (like for the other `m-bind` functions that we have seen). The result of that call is of course another state-monad value, and thus a function of a state variable. When we are inside our `(fn [s] ...)`, we are already at the execution stage, so we have to call that function on the state `ss`, the one that resulted from the execution of the first computational step.
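To see `m-bind` at work without any library support, here is a standalone sketch that chains two statements by hand (definitions repeated from the text above so the snippet is self-contained):

```clojure
;; standalone state-monad plumbing
(defn m-result [v]
  (fn [s] [v s]))

(defn m-bind [mv f]
  (fn [s]
    (let [[v ss] (mv s)]
      ((f v) ss))))

(defn fetch-val [key]
  (fn [s] [(key s) s]))

(defn set-val [key val]
  (fn [s] [(get s key) (assoc s key val)]))

;; read :a, then write its value into :b
(def stmt (m-bind (fetch-val :a)
                  (fn [x] (set-val :b x))))

(stmt {:a 1 :b 2})
;; => [2 {:a 1, :b 1}]
```

The returned vector holds the old value of `:b` (what `set-val` reports) and the state after both statements have run.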

The state monad is one of the most basic monads, of which many variants are in use. Usually such a variant adds something to `m-bind` that is specific to the kind of state being handled. An example is the stream monad in `clojure.contrib.stream-utils`. (*NOTE: the stream monad has not been migrated to the new Clojure contrib library set.*) Its state describes a stream of data items, and the `m-bind` function checks for invalid values and for the end-of-stream condition in addition to what the basic `m-bind` of the state monad does.

A variant of the state monad that is so frequently used that it has itself become one of the standard monads is the writer monad. Its state is an accumulator (any type implementing the protocol `writer-monad-protocol`, for example strings, lists, vectors, and sets), to which computations can add something by calling the function `write`. The name comes from a particularly popular application: logging. Take a basic computation in the identity monad, for example (remember that the identity monad is just Clojure’s built-in `let`). Now assume you want to add a protocol of the computation in the form of a list or a string that accumulates information about the progress of the computation. Just change the identity monad to the writer monad, and add calls to `write` where required!
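A minimal example of the pattern, assuming `clojure.algo.monads` (which provides `writer-m` and `write`); the accumulator here is a vector:

```clojure
(require '[clojure.algo.monads :refer [domonad writer-m write]])

;; a computation in the writer monad with a vector accumulator;
;; the result pairs the computed value with everything written
(domonad (writer-m [])
  [x (m-result 2)
   _ (write :doubling)
   y (m-result (* 2 x))]
  (+ x y))
;; => [6 [:doubling]]
```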

Here is a concrete example: the well-known Fibonacci function in its most straightforward (and most inefficient) implementation:

```clojure
(defn fib [n]
  (if (< n 2)
    n
    (let [n1 (dec n)
          n2 (dec n1)]
      (+ (fib n1) (fib n2)))))
```

Let’s add some protocol of the computation in order to see which calls are made to arrive at the final result. First, we rewrite the above example a bit to make every computational step explicit:

```clojure
(defn fib [n]
  (if (< n 2)
    n
    (let [n1 (dec n)
          n2 (dec n1)
          f1 (fib n1)
          f2 (fib n2)]
      (+ f1 f2))))
```

Second, we replace `let` by `domonad` and choose the writer monad with a vector accumulator:

```clojure
(with-monad (writer-m [])

  (defn fib-trace [n]
    (if (< n 2)
      (m-result n)
      (domonad
        [n1 (m-result (dec n))
         n2 (m-result (dec n1))
         f1 (fib-trace n1)
         _  (write [n1 f1])
         f2 (fib-trace n2)
         _  (write [n2 f2])]
        (+ f1 f2)))))
```

Finally, we run `fib-trace` and look at the result:

```clojure
(fib-trace 3)
;; => [2 [[1 1] [0 0] [2 1] [1 1]]]
```

The first element of the return value, 2, is the result of the function `fib`. The second element is the protocol vector containing the arguments and results of the recursive calls.

Note that it is sufficient to comment out the lines with the calls to `write` and change the monad to `identity-m` to obtain a standard `fib` function with no protocol – try it out for yourself!

Part 4 will show you how to define your own monads by combining monad building blocks called monad transformers. As an illustration, I will explain the probability monad and how it can be used for Bayesian estimates when combined with the maybe-transformer.
