Before moving on to the more advanced aspects of monads, let’s recapitulate what defines a monad (see part 1 and part 2 for explanations):

- A data structure that represents the result of a computation, or the computation itself. We haven’t seen an example of the latter case yet, but it will come soon.
- A function `m-result` that converts an arbitrary value to a monadic data structure equivalent to that value.
- A function `m-bind` that binds the result of a computation, represented by the monadic data structure, to a name (using a function of one argument) to make it available in the following computational step.

Taking the sequence monad as an example, the data structure is the sequence, representing the outcome of a non-deterministic computation, `m-result` is the function `list`, which converts any value into a list containing just that value, and `m-bind` is a function that executes the remaining steps once for each element in a sequence, and removes one level of nesting in the result.
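In plain Clojure terms, these two definitions can be sketched as ordinary functions (a sketch for illustration, not the library’s actual code):

```clojure
;; The sequence monad, written out as plain functions.
(defn m-result [v] (list v))

(defn m-bind [xs f]
  ;; Run the remaining steps (f) once per element,
  ;; then remove one level of nesting.
  (apply concat (map f xs)))

(m-bind '(1 2 3) (fn [a] (list a (* 10 a))))
; → (1 10 2 20 3 30)
```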

The three ingredients above are what defines a monad, under the condition that the three monad laws are respected. Some monads have two additional definitions that make it possible to perform additional operations. These two definitions have the names `m-zero` and `m-plus`. `m-zero` represents a special monadic value that corresponds to a computation with no result. One example is `nil` in the maybe monad, which typically represents a failure of some kind. Another example is the empty sequence in the sequence monad. The identity monad is an example of a monad that has no `m-zero`.

`m-plus` is a function that combines the results of two or more computations into a single one. For the sequence monad, it is the concatenation of several sequences. For the maybe monad, it is a function that returns the first of its arguments that is not `nil`.

There is a condition that has to be satisfied by the definitions of `m-zero` and `m-plus` for any monad:

(= (m-plus m-zero monadic-expression)
   (m-plus monadic-expression m-zero)
   monadic-expression)

In words, combining `m-zero` with any monadic expression must yield the same expression. You can easily verify that this is true for the two examples (maybe and sequence) given above.
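For the sequence monad, the law can be checked directly by writing `m-zero` and `m-plus` out as plain values and functions (a sketch):

```clojure
;; Sequence monad: m-zero is the empty sequence,
;; m-plus is concatenation.
(def m-zero ())
(defn m-plus [& mvs] (apply concat mvs))

(let [mv '(1 2 3)]
  (= (m-plus m-zero mv)   ; combining with m-zero on the left...
     (m-plus mv m-zero)   ; ...or on the right...
     mv))                 ; ...changes nothing
; → true
```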

One benefit of having an `m-zero` in a monad is the possibility to use conditions. In the first part, I promised to return to the `:when` clauses in Clojure’s `for` forms, and now the time has come to discuss them. A simple example is

(for [a (range 5)
      :when (odd? a)]
  (* 2 a))

The same construction is possible with `domonad`:

(domonad sequence-m
  [a (range 5)
   :when (odd? a)]
  (* 2 a))

Recall that `domonad` is a macro that translates a `let`-like syntax into a chain of calls to `m-bind` ending in a call to `m-result`. The clause `a (range 5)` becomes

(m-bind (range 5) (fn [a] remaining-steps))

where `remaining-steps` is the transformation of the rest of the `domonad` form. A `:when` clause is treated specially: it becomes

(if predicate remaining-steps m-zero)

Our small example thus expands to

(m-bind (range 5)
        (fn [a]
          (if (odd? a)
            (m-result (* 2 a))
            m-zero)))

Inserting the definitions of `m-bind`, `m-result`, and `m-zero`, we finally get

(apply concat
       (map (fn [a]
              (if (odd? a)
                (list (* 2 a))
                (list)))
            (range 5)))

The result of `map` is a sequence of lists that have zero or one elements: zero for even values (the value of `m-zero`) and one for odd values (produced by `m-result`). `concat` makes a single flat list out of this, which contains only the elements that satisfy the `:when` clause.
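We can confirm that the expanded form computes the same result as the original `for` expression:

```clojure
;; The for form with :when and its hand-expanded equivalent
;; both yield (2 6).
(= (for [a (range 5) :when (odd? a)] (* 2 a))
   (apply concat
          (map (fn [a]
                 (if (odd? a)
                   (list (* 2 a))
                   (list)))
               (range 5))))
; → true
```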

As for `m-plus`, it is in practice used mostly with the maybe and sequence monads, or with variations of them. A typical use would be a search algorithm (think of a parser, a regular expression search, a database query) that can succeed (with one or more results) or fail (no results). `m-plus` would then be used to pursue alternative searches and combine the results into one (sequence monad), or to continue searching until a result is found (maybe monad). Note that it is perfectly possible in principle to have a monad with an `m-zero` but no `m-plus`, though in all common cases an `m-plus` can be defined as well if an `m-zero` is known.

After this bit of theory, let’s get acquainted with more monads. In the beginning of this part, I mentioned that the data structure used in a monad does not always represent the result(s) of a computational step, but sometimes the computation itself. An example of such a monad is the state monad, whose data structure is a function.

The state monad’s purpose is to facilitate the implementation of stateful algorithms in a purely functional way. Stateful algorithms are algorithms that require updating some variables. They are of course very common in imperative languages, but they are not compatible with the basic principle of pure functional programming, which excludes mutable data structures. One way to simulate state changes while remaining purely functional is to have a special data item (in Clojure that would typically be a map) that stores the current values of all mutable variables that the algorithm refers to. A function that in an imperative program would modify a variable now takes the current state as an additional input argument and returns an updated state along with its usual result. The changing state thus becomes explicit in the form of a data item that is passed from function to function as the algorithm’s execution progresses. The state monad is a way to hide the state-passing behind the scenes and write an algorithm in an imperative style that consults and modifies the state.
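As a sketch of this explicit state-passing style (`increment-counter` is a hypothetical example, not part of any library): a counter kept in a map is threaded through each call by hand:

```clojure
;; Explicit state passing without any monad: each 'stateful' function
;; takes the current state map and returns [result new-state].
(defn increment-counter [s]
  (let [n (inc (get s :counter 0))]
    [n (assoc s :counter n)]))

(let [s0 {}
      [r1 s1] (increment-counter s0)
      [r2 s2] (increment-counter s1)]
  [r1 r2 s2])
; → [1 2 {:counter 2}]
```

The state monad automates exactly this threading of `s0`, `s1`, `s2` from one call to the next.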

The state monad differs from the monads that we have seen before in that its data structure is a function. This is thus a case of a monad whose data structure represents not the result of a computation, but the computation itself. A state monad value is a function that takes a single argument, the current state of the computation, and returns a vector of length two containing the result of the computation and the updated state after the computation. In practice, these functions are typically closures, and what you use in your program code are functions that create these closures. Such state-monad-value-generating functions are the equivalent of statements in imperative languages. As you will see, the state monad allows you to compose such functions in a way that makes your code look perfectly imperative, even though it is still purely functional!

Let’s start with a simple but frequent situation: the state that your code deals with takes the form of a map. You may consider that map to be a namespace in an imperative language, with each key defining a variable. Two basic operations are reading the value of a variable, and modifying that value. They are already provided in the Clojure monad library, but I will show them here anyway because they make nice examples.

First, we look at `fetch-val`, which retrieves the value of a variable:

(defn fetch-val [key]
  (fn [s] [(key s) s]))

Here we have a simple state-monad-value-generating function. It returns a function of a state variable `s` which, when executed, returns a vector of the return value and the new state. The return value is the value corresponding to the key in the map that is the state value. The new state is just the old one – a lookup should not change the state, of course.
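A quick check at the REPL (the definition is repeated so the snippet is self-contained):

```clojure
(defn fetch-val [key]
  (fn [s] [(key s) s]))

;; Running the state-monad value on a concrete state map:
((fetch-val :a) {:a 1 :b 2})
; → [1 {:a 1 :b 2}]
```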

Next, let’s look at `set-val`, which modifies the value of a variable and returns the previous value:

(defn set-val [key val]
  (fn [s]
    (let [old-val (get s key)
          new-s   (assoc s key val)]
      [old-val new-s])))

The pattern is the same again: `set-val` returns a function of state `s` that, when executed, returns the old value of the variable plus an updated state map in which the new value is the given one.
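Again a quick check (definition repeated for self-containedness):

```clojure
(defn set-val [key val]
  (fn [s]
    (let [old-val (get s key)
          new-s   (assoc s key val)]
      [old-val new-s])))

;; The old value comes back as the result; the new state has the update:
((set-val :a 10) {:a 1 :b 2})
; → [1 {:a 10, :b 2}]
```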

With these two ingredients, we can start composing statements. Let’s define a statement that copies the value of one variable into another one and returns the previous value of the modified variable:

(defn copy-val [from to]
  (domonad state-m
    [from-val   (fetch-val from)
     old-to-val (set-val to from-val)]
    old-to-val))

What is the result of `copy-val`? A state-monad value, of course: a function of a state variable `s` that, when executed, returns the old value of variable `to` plus the state in which the copy has taken place. Let’s try it out:

(let [initial-state        {:a 1 :b 2}
      computation          (copy-val :b :a)
      [result final-state] (computation initial-state)]
  final-state)

We get `{:a 2, :b 2}`, as expected. But how does it work? To understand the state monad, we need to look at its definitions for `m-result` and `m-bind`, of course.

First, `m-result`, which does not contain any surprises: it returns a function of a state variable `s` that, when executed, returns the result value `v` and the unchanged state `s`:

(defn m-result [v]
  (fn [s] [v s]))

The definition of `m-bind` is more interesting:

(defn m-bind [mv f]
  (fn [s]
    (let [[v ss] (mv s)]
      ((f v) ss))))

Obviously, it returns a function of a state variable `s`. When that function is executed, it first runs the computation described by `mv` (the first ‘statement’ in the chain set up by `m-bind`) by applying it to the state `s`. The return value is decomposed into result `v` and new state `ss`. The result of the first step, `v`, is injected into the rest of the computation by calling `f` on it (like for the other `m-bind` functions that we have seen). The result of that call is of course another state-monad value, and thus a function of a state variable. When we are inside our `(fn [s] ...)`, we are already at the execution stage, so we have to call that function on the state `ss`, the one that resulted from the execution of the first computational step.
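To see all the pieces working together, here is `copy-val` expanded by hand into the chain of `m-bind` calls that `domonad` generates (a sketch using the definitions above as plain top-level functions):

```clojure
;; The state monad, written out as plain functions:
(defn m-result [v] (fn [s] [v s]))
(defn m-bind [mv f]
  (fn [s]
    (let [[v ss] (mv s)]
      ((f v) ss))))

(defn fetch-val [key] (fn [s] [(key s) s]))
(defn set-val [key val]
  (fn [s] [(get s key) (assoc s key val)]))

;; copy-val as the chain of m-bind calls that domonad expands to:
(defn copy-val [from to]
  (m-bind (fetch-val from)
          (fn [from-val]
            (m-bind (set-val to from-val)
                    (fn [old-to-val]
                      (m-result old-to-val))))))

((copy-val :b :a) {:a 1 :b 2})
; → [1 {:a 2, :b 2}]
```

The result is the old value of `:a` paired with the state after the copy, exactly as in the `domonad` version.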

The state monad is one of the most basic monads, of which many variants are in use. Usually such a variant adds something to `m-bind` that is specific to the kind of state being handled. An example is the stream monad in `clojure.contrib.stream-utils`. (*NOTE: the stream monad has not been migrated to the new Clojure contrib library set.*) Its state describes a stream of data items, and the `m-bind` function checks for invalid values and for the end-of-stream condition in addition to what the basic `m-bind` of the state monad does.

A variant of the state monad that is so frequently used that it has itself become one of the standard monads is the writer monad. Its state is an accumulator (any type implementing the protocol `writer-monad-protocol`, for example strings, lists, vectors, and sets), to which computations can add something by calling the function `write`. The name comes from a particularly popular application: logging. Take a basic computation in the identity monad, for example (remember that the identity monad is just Clojure’s built-in `let`). Now assume you want to add a protocol of the computation in the form of a list or a string that accumulates information about the progress of the computation. Just change the identity monad to the writer monad, and add calls to `write` where required!
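The writer monad itself can be sketched in a few lines for a vector accumulator; a monadic value is a pair of result and accumulated log (a simplified sketch, not the library’s actual implementation):

```clojure
;; Writer monad with a vector accumulator, written out as plain functions.
;; A monadic value is a pair [result log].
(defn m-result [v] [v []])
(defn m-bind [[v log] f]
  (let [[v2 log2] (f v)]
    [v2 (into log log2)]))      ; logs of both steps are concatenated
(defn write [entry] [nil [entry]])

;; Chain two steps, logging in between:
(m-bind (m-result 1)
        (fn [x]
          (m-bind (write [:got x])
                  (fn [_]
                    (m-result (inc x))))))
; → [2 [[:got 1]]]
```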

Here is a concrete example: the well-known Fibonacci function in its most straightforward (and most inefficient) implementation:

(defn fib [n]
  (if (< n 2)
    n
    (let [n1 (dec n)
          n2 (dec n1)]
      (+ (fib n1) (fib n2)))))

Let’s add some protocol of the computation in order to see which calls are made to arrive at the final result. First, we rewrite the above example a bit to make every computational step explicit:

(defn fib [n]
  (if (< n 2)
    n
    (let [n1 (dec n)
          n2 (dec n1)
          f1 (fib n1)
          f2 (fib n2)]
      (+ f1 f2))))

Second, we replace `let` by `domonad` and choose the writer monad with a vector accumulator:

(with-monad (writer-m [])
  (defn fib-trace [n]
    (if (< n 2)
      (m-result n)
      (domonad
        [n1 (m-result (dec n))
         n2 (m-result (dec n1))
         f1 (fib-trace n1)
         _  (write [n1 f1])
         f2 (fib-trace n2)
         _  (write [n2 f2])]
        (+ f1 f2)))))

Finally, we run `fib-trace` and look at the result:

(fib-trace 3)
[2 [[1 1] [0 0] [2 1] [1 1]]]

The first element of the return value, 2, is the result of the function `fib`. The second element is the protocol vector containing the arguments and results of the recursive calls.

Note that it is sufficient to comment out the lines with the calls to `write` and change the monad to `identity-m` to obtain a standard `fib` function with no protocol – try it out for yourself!

Part 4 will show you how to define your own monads by combining monad building blocks called monad transformers. As an illustration, I will explain the probability monad and how it can be used for Bayesian estimates when combined with the maybe-transformer.

I’d love to see a concrete example of using the state monad to do logging. I can’t quite get it from your description without seeing it in action, because it sounds like you can only do a write command in a place where you’d typically do a let, rather than at an arbitrary point in a sequence of statements.

Comment by Puzzler — March 23, 2009 @ 7:42 pm

I have added a concrete example: the Fibonacci function with a tracing system for recursive calls. I hope this makes it clearer!

Comment by khinsen — March 24, 2009 @ 9:31 am

Yes, much clearer. But now I’m wondering whether it is possible to combine the behavior of two monads in one body of code. For example, what if you want to be able to “write” some lines, and on other lines get the “maybe” behavior, and on other lines get the “identity” behavior?

Comment by Puzzler — March 24, 2009 @ 6:23 pm

Combining monads using monad transformers will be the topic of part 4. It is not a general possibility; you can’t combine any monad with any other one, but it covers a lot of situations that come up in practice.

Of course, nothing prevents you from using one monad within another if that’s what your problem requires. For example, you can well have a “domonad writer-m” and inside one of the right-hand-side expressions put a “domonad maybe-m”.

Monad transformers are different in that they produce one monad that combines the aspects of two (or more) individual monads.

Comment by khinsen — March 25, 2009 @ 8:55 am

A quote from Rich Hickey that I like is ‘mutability is the new spaghetti code’: if you can mutate an object, you can produce ‘action at a distance’ that seems convenient at first, but in the end makes your program logic harder to follow.

I understand that the state monad makes it possible to make your code seem stateful, but keeps it possible to analyze it as a purely functional program. But does that convenience drop you back into ‘the new spaghetti code’, or are programs with the state monad easier to follow (for human coders) than programs with genuine mutability?

Comment by Anand Patil — April 6, 2009 @ 5:58 pm

I don’t think there is a clear yes/no answer to that question. It is possible to write hard-to-read code in any style, including pure functional, and of course also including the state monad.

However, there are a few important differences between programming with mutable state and programming in the state monad. There is no real mutable state in the state monad, so in terms of concurrency state monad code is definitely purely functional. But most importantly, the state monad is just one tool in the functional programmer’s toolbox, and it should be used only where it makes the code clearer. This is the case for algorithms where keeping track of a state is required anyway, a classical example being labelling a graph. Programmers who use the state monad as a convenient way to write imperative code in a functional language will most likely produce bad code. As a rule of thumb, I’d say that if you start implementing loop constructs in the state monad, you are probably going too far.

Comment by khinsen — April 14, 2009 @ 7:56 am

Speaking from experience, and NOT using monad transformers, the ease of use of a state monad is about the same as any other imperative programming language. The nice thing about using the state monad with Haskell, however, is you get the advantage of its type inferencing/checking, which is a significant plus. This prevents you from mixing up your monads, which in turn, results in programs where the logic of multiple parts of a program are kept separate _syntactically_ and is compiler-checkable.

Again, introducing monad transformers muddles the water a bit, but once again, it’s statically checkable. You cannot mix multiple monads without having declared it first. This might seem like a big burden; in fact, however, it aids greatly in program cleanliness and modularity.

ONCE you wrap your head around transformers. ;)

Comment by Samuel A. Falvo II — April 16, 2009 @ 12:24 am

I think that ease of understanding monadic code is a key issue, which is distinct from “ease of use” (for the programmer). For code that MUST be understood in detail by others (e.g., in situations requiring high assurance of security policy enforcement), macros and new language features that make the semantics of monads more explicit, and thus clearer, can be very helpful. One man’s clutter is another man’s key, I suppose, but in my view, the use of raw monads and related transformers is problematic in high assurance environments, especially when combined with implicit typing.

Comment by Tim Levin — June 17, 2009 @ 7:39 pm

[...] part 3, I will come back to the :when clause in for loops, and show how it is implemented and generalized [...]

Pingback by A monad tutorial for Clojure programmers (part 2) « On Clojure — April 24, 2009 @ 4:14 pm

[...] mutable state in the state monad Filed under: Uncategorized — khinsen @ 4:19 pm In part 3 of my monad tutorial for Clojure programmers, I explained that the main purpose of the state monad [...]

Pingback by Protecting mutable state in the state monad « On Clojure — June 24, 2009 @ 4:21 pm

Sorry to pick nits, but I think your use of the term “deterministic” is wrong. A deterministic algorithm is one for which the output can be completely predicted. In other words, it does not depend on “outside” influences. A sequence is deterministic if it can be calculated using only the input values to the algorithm itself. The Fibonacci sequence is deterministic, whereas a pseudo-random-number generator seeded by outside entropy would not be.

Comment by G. Ralph Kuntz, MD — May 5, 2010 @ 11:09 am

Unfortunately the term “deterministic” is used with (at least) two different meanings when referring to algorithms. I don’t like the one used here either, but followed what seems common practice. Look at Wikipedia for an example.

Comment by khinsen — May 10, 2010 @ 9:34 am


PRNG’s seed is also an input, so PRNGs are deterministic. If they weren’t, they wouldn’t be pseudo-random, but just random.

The Fibonacci sequence (with one input, sequence element number) wouldn’t be deterministic when fed a single outside-entropy integer?

Am I missing something?

Comment by noobgp — May 5, 2012 @ 12:16 am

What is the Clojure / monads analog of this Python:

def fib():
    a = b = 1
    while True:
        yield a
        a, b = b, a + b

Comment by RosenK — September 1, 2010 @ 6:39 pm

As a rule of thumb, generators in Python correspond to lazy sequences in Clojure. The closest equivalent to your Python code in Clojure would be
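something along these lines (a sketch using `lazy-seq`, matching the description that follows):

```clojure
;; A lazy Fibonacci sequence with a zero-argument and a two-argument body.
(defn fib
  ([] (fib 1 1))                 ; initialize the recurrence with two ones
  ([a b] (lazy-seq (cons a (fib b (+ a b))))))

(take 8 (fib))
; → (1 1 2 3 5 8 13 21)
```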

This defines a function fib with a zero-argument and a two-argument body. The zero-argument body just calls the two-argument one with two ones, for initializing the recurrence relation. The interesting part is in the two-argument body. It does exactly the same as your Python generator does.

Comment by khinsen — September 2, 2010 @ 3:41 pm

I’m quite disappointed by your answer because I’m convinced that something similar to Python’s yield could be implemented in Clojure with monads (but I don’t know how, and that was the whole point of my question). In this way the Clojure code would be much more similar to the Python approach. I’m not looking to make Clojure Python-like, but I’m interested in how the monad concepts are used in the different languages. I’m quite sure that the Fibonacci solution that you gave is probably better than anything with monads – but that was not the point.

To paraphrase: I’m looking for some sort of generalization of the above code with monads that would bring a ‘yield’ into the Clojure world.

Comment by RosenK — September 3, 2010 @ 9:16 am

I think I understand better now what you are looking for, but I am not sure monads are the right approach. Remember that monads are a technique for combining computational steps that need to take into account some specific effect. The monad is designed to take care of the effect. Which effect (and thus which monad) would you want to have applied to the generation of Fibonacci numbers?

The only monad I can see as of some relevance to your example is the continuation monad. You could write all your computations in continuation-passing style and use the continuation monad to combine them. You could then implement generator-like constructs through continuations. However, this looks way too complicated to be of any practical interest.

Comment by khinsen — September 3, 2010 @ 2:37 pm

Why is yield practical in Python but not practical in Clojure? Why is there any difference?

Comment by RosenK — September 9, 2010 @ 3:35 pm

Python implements generators (and thus yield) using a lot of magic going on behind the scenes. Generators should be considered a special control structure in Python. Clojure has different functional control structures that cover most practical situations where Python programmers would use generators. I don’t see any reason why it would be impractical to introduce generators into Clojure, but I don’t see any good reason for doing so either.

Comment by khinsen — September 9, 2010 @ 5:07 pm

I too am interested in the monadic implementation of fib. Basically my understanding is that a lazy sequence is a continuation monad where the head is the current result and tail represents the future computation.

F# uses sequence monads to construct IEnumerable. As you pointed out there is some compiler magic that rearranges the statements into monadic expressions (http://blogs.msdn.com/b/dsyme/archive/2007/09/22/some-details-on-f-computation-expressions-aka-monadic-or-workflow-syntax.aspx). In clojure I would assume one could do this same transformation with a macro.

So back to the question of why? Although this seems unnecessary in Clojure, where lazy sequences accomplish the same thing, I’m still wondering if lazy-seq may be the bridge between, rather than a replacement for, the monadic representation.

Here is a web application where the author used fill-queue to convert the web server into a sequence generator rather than a event dispatcher. http://infolace.blogspot.com/2009/08/simple-webhooks-with-clojure-and-ring.html.

Using the same technique I could write fib in an imperative style.

(def fib
  (fill-queue
    (fn [fill]
      (loop [a 0 b 1]
        (fill a)
        (recur b (+ a b))))))

However the above uses another thread which blocks when the queue is full, in some cases this concurrent computation may be desirable, but it probably isn’t desirable to dedicate a thread to each computation.

In F# events can be composed, filtered and mapped just as a sequence of events rather than traditional event handlers (http://codebetter.com/matthewpodwysocki/2009/08/11/first-class-composable-events-in-f/) which makes the code very expressive.

Here is a clojure web framework that uses a continuation monad to preserve the state of a web application http://intensivesystems.net/tutorials/cont_m_web.html. In this case continuations rather than separate threads are used to maintain state. Could this be implemented as a lazy sequence instead? In F# asynchronous workflows are also implemented via monads. Some C# developers are using IEnumerable and linq to develop async workflows (http://tomasp.net/blog/csharp-async.aspx). So I think the answer is yes, but the real question I think is would using a lazy sequence make the code easier or more difficult to understand, write and maintain?

I think applications might be written as async workflows, and instead of using event handlers they pull a sequence of events and maintain state between events in local variables (state monad). This would have the feel of stdin/stdout in unix, but rather than the workflow blocking on output and requiring a dedicated thread, a continuation could be used to externalize the choice of threading model.

Comment by Kurt Harriger — December 21, 2010 @ 5:24 pm

Why do you use (get s key) in set-val not (key s)?

Comment by Jacek Laskowski — September 9, 2010 @ 4:27 pm

I prefer the similarity that (get s key) has to (assoc s key val), that’s all!

Comment by khinsen — September 9, 2010 @ 4:58 pm

Just noticed – shouldn’t

(domonad sequence
  [a (range 5)
   :when (odd? a)]
  (* 2 a))

use sequence-m?

Comment by Jacek Laskowski — October 18, 2010 @ 12:05 pm

Read it again and spot 2 typos:

“rest of to the domonad form” should become “rest of the domonad form” (‘to’ removed).
“used that is has itself” should become “used that has itself” (‘is’ removed).

Thanks for the article and the others! They’re very mind-blowing and simply awesome. I wish you’d share when/how you used the library so I’d follow along and understand monads better.

Comment by Jacek Laskowski — October 28, 2010 @ 10:50 am

The typos are fixed. Thanks for the feedback! Unfortunately all my applications are either throwaway code (not very presentable) or somewhat confidential.

Comment by khinsen — October 29, 2010 @ 12:37 pm

I think it should be, yeah.

Comment by Justin Heyes-Jones — August 23, 2012 @ 3:00 am

That was supposed to be a reply to comment 8.

Comment by Justin Heyes-Jones — August 23, 2012 @ 3:01 am

[...] monad tutorial for Clojure programmers Part 1, Part 2, Part 3, Part [...]

Pingback by Monad Monad Monad … | 따라쟁이 — September 1, 2012 @ 1:53 pm

I’d like to see an example of some interesting use of m-plus.

Comment by Mark — December 6, 2012 @ 5:43 am

The classic use-case for m-plus is chaining together multiple data sources, e.g. in parsers (different branches of the grammar being explored) or in backtracking in logic programming (as in core.logic).

Comment by khinsen — December 6, 2012 @ 1:39 pm

m-bind, m-result, and m-zero all enable some important aspect of the domonad macro. Does m-plus play any role in domonad? If not, it really seems like the odd man out and it’s not really clear to me why you’d define it as part of the monad. Couldn’t there be more than one valid way to chain together multiple data sources, and if so, shouldn’t the chaining function really be defined for the task at hand, rather than as part of the monad itself?

Comment by Mark — December 6, 2012 @ 3:17 pm

m-plus is indeed different because not every monad has one. The minimal requirements for a monad are m-bind and m-result. The optional add-on package consists of m-zero and m-plus. The reason for defining m-zero and m-plus as part of a monad is that they make sense only if there’s also m-bind and m-result.

There are indeed cases where different useful definitions of m-plus exist for identical m-bind/m-result. The way to handle this in algo.monads is to define different monads. One could try to factor out this aspect, but that would also complicate the monad library. I am not convinced that it would be worth the effort.

Comment by khinsen — December 6, 2012 @ 4:08 pm