Posted on November 24, 2025
This is a summary of the paper Generalising Monads to Arrows by John Hughes.
In Three Points
- Arrows are a generalisation of monads that can model a wider range of computations
- Both monads and arrows provide an excellent set of combinators for code reuse
- Both arrows and monads make excellent interfaces for building libraries
The state of monads
Monads are a popular choice for structuring computations in Haskell. They provide a standard interface for chaining operations together. However, monads are not suitable for all kinds of computations. Arrows are a more general abstraction that can model a wider range of computations.
The paper Generalising Monads to Arrows explores some of the limitations of monads and how arrows can be used to overcome these limitations.
Monads the Good Parts
Why use Monads?
Monads can be used to model a wide range of contexts for computations. They have great backing by the Haskell ecosystem and a large number of useful combinators defined in the standard library. If you can model your application’s effects with a monad you should, here’s why:
- Monads simplify code
- Monads unify design patterns (libraries can interoperate)
- Monads enable code reuse through generic combinators
A monad is a kind of standardised interface to an abstract datatype of ‘program fragments’.
– John Hughes, Generalising Monads to Arrows
For me the most important point is the last one: monads enable code reuse through generic combinators. Here are some examples:
liftM2 :: Monad m => (a -> b -> c) -> m a -> m b -> m c
mapM :: (Traversable t, Monad m) => (a -> m b) -> t a -> m (t b)
forM_ :: (Foldable t, Monad m) => t a -> (a -> m b) -> m ()
Parsing: a problem without a satisfactory Monad solution
A parser could be (simply) modelled as newtype Parser s a = Parser { runParser :: [s] -> [(a, [s])] }.
From this definition we can define an instance for Monad, MonadZero and MonadPlus for our Parser type.
MonadPlus’s mplus (written (<|>) in the modern Alternative class) lets us try one parser and, if it fails, try another. It’s probably one of the most useful operations for parsers.
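A sketch of how these instances look for the list-of-successes parser, using the modern Functor/Applicative/Alternative hierarchy rather than the paper's MonadZero/MonadPlus (satisfy is a hypothetical helper, not from the paper):

```haskell
import Control.Applicative (Alternative (..))

-- the list-of-successes parser from above
newtype Parser s a = Parser { runParser :: [s] -> [(a, [s])] }

instance Functor (Parser s) where
  fmap f (Parser p) = Parser (\ss -> [ (f a, rest) | (a, rest) <- p ss ])

instance Applicative (Parser s) where
  pure a = Parser (\ss -> [(a, ss)])
  Parser pf <*> Parser pa =
    Parser (\ss -> [ (f a, ss'') | (f, ss') <- pf ss, (a, ss'') <- pa ss' ])

instance Monad (Parser s) where
  Parser p >>= k =
    Parser (\ss -> [ r | (a, ss') <- p ss, r <- runParser (k a) ss' ])

instance Alternative (Parser s) where
  empty = Parser (const [])
  -- try the first parser; the input must be retained in case the
  -- second parser needs to re-read it from the start
  Parser p <|> Parser q = Parser (\ss -> p ss ++ q ss)

-- hypothetical helper: parse one symbol satisfying a predicate
satisfy :: (s -> Bool) -> Parser s s
satisfy ok = Parser f
  where
    f (s : ss) | ok s = [(s, ss)]
    f _                = []
```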
However this design has a problem: a parser must try the complete first branch before it can even begin to try the second branch. This can lead to excessive backtracking and memory usage.
The input cannot be garbage collected until the entire parse is complete, because the first branch must hold on to it in case the second branch needs to re-read it.
This is the kind of computation monads model poorly: when every computation must be chained linearly through bind, there is no way to analyse or optimise a combination of computations based on their input before running them.
Swierstra and Duponcheel’s solution
They proposed creating a parser with a static and dynamic component:
The static portion analyses the input and decides which dynamic parsers to run. The dynamic portion runs the actual parsers. This eliminates backtracking and allows garbage collection of input that is no longer needed.
The problem with a monad’s bind, m a -> (a -> m b) -> m b, is that the second computation is a function of the first computation’s run-time result. Its structure cannot be inspected before any input is consumed, so bind cannot express Swierstra and Duponcheel’s split into static and dynamic parts.
This means we have to make a decision: either we give up on monads and all the great combinators we get for free when we use monads, or we accept the space-leak… or we come up with a more general abstraction.
Arrows
The key insight:
We track the input to the arrow as well as the output. This allows us to analyse the input and optimise computations. In the case of parsers this allows us to define a static and dynamic component to avoid backtracking.
Arrows are parameterised on two type parameters: the input type and the output type. In the context of monads we read m a as a computation delivering a value of type a. In the context of arrows we read a b c as a computation with input of type b that delivers a value of type c.
The arrow class
This is a simplification of the Arrow class:
class Arrow a where
arr :: (b -> c) -> a b c
(>>>) :: a b c -> a c d -> a b d -- compose
The arr function is the Arrow equivalent of pure/return: it lifts a pure function into the arrow context. The (>>>) operator is arrow composition: it glues two arrows together, feeding the output of the first into the input of the second.
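The simplest Arrow instance is ordinary functions, where arr is essentially the identity and (>>>) is flipped function composition. A small sketch:

```haskell
import Control.Arrow (arr, (>>>))

addOne :: Int -> Int
addOne = (+ 1)

double :: Int -> Int
double = (* 2)

-- arr lifts each pure function; (>>>) pipes them left to right,
-- so pipeline x computes (x + 1) * 2
pipeline :: Int -> Int
pipeline = arr addOne >>> arr double
```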
There are a number of combinators that make arrows more useful:
first :: Arrow a => a b c -> a (b, d) (c, d)
second :: Arrow a => a b c -> a (d, b) (d, c)
(***) :: Arrow a => a b c -> a d e -> a (b, d) (c, e)
(***) f g = first f >>> second g
(&&&) :: Arrow a => a b c -> a b d -> a b (c, d)
(&&&) f g = arr (\x -> (x, x)) >>> (f *** g)
liftA2 :: Arrow a => (b -> c -> d) -> a e b -> a e c -> a e d
liftA2 op f g = (f &&& g) >>> arr (\(b, c) -> b `op` c)
Arrows work naturally with tuples because they need to route multiple pieces of data through computations. The tuple (b, d) in first :: a b c -> a (b, d) (c, d) lets us say “transform the first part, leave the second part alone.” This is fundamental to arrow composition.
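These combinators are easy to try out on the function arrow. A short sketch using the (->) instance:

```haskell
import Control.Arrow (first, (&&&))

-- first transforms the first component and passes the second through
shiftFst :: (Int, String) -> (Int, String)
shiftFst = first (+ 1)

-- (&&&) fans a single input out to two computations
stats :: [Int] -> (Int, Int)
stats = sum &&& length
</imports>
```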
We can now define a parser as an arrow:
data StaticParser s = SP Bool [s]
newtype DynamicParser s a b = DP ((a, [s]) -> Maybe (b, [s]))
-- now our parser is parametrised on input (a) and output (b) types
data Parser s a b = P (StaticParser s) (DynamicParser s a b)
-- instance implementation is given by Hughes.
instance Arrow (Parser s) where
The arrow version allows the StaticParser component to analyze the input structure independently, determining which DynamicParser to run without actually parsing the input. This separation of analysis from execution eliminates the space leak.
And now the point: we can define ArrowZero and ArrowPlus for our Parser without a space leak!
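A hedged sketch of how choice can use the static part to commit to one branch up front; the names orElse, symbol and runP are illustrative, and the details differ from Hughes's full definitions:

```haskell
data StaticParser s = SP Bool [s]  -- (accepts empty?, possible starting symbols)
newtype DynamicParser s a b = DP ((a, [s]) -> Maybe (b, [s]))
data Parser s a b = P (StaticParser s) (DynamicParser s a b)

-- choice: look at the next input symbol against the static tables and
-- pick one branch before running anything -- no backtracking, so the
-- consumed input never needs to be retained
orElse :: Eq s => Parser s a b -> Parser s a b -> Parser s a b
orElse (P (SP e1 starts1) (DP d1)) (P (SP e2 starts2) (DP d2)) =
  P (SP (e1 || e2) (starts1 ++ starts2)) (DP run)
  where
    run inp@(_, s : _)
      | s `elem` starts1 = d1 inp
      | s `elem` starts2 = d2 inp
    run inp@(_, [])
      | e1 = d1 inp
      | e2 = d2 inp
    run _ = Nothing

-- parse exactly the given symbol
symbol :: Eq s => s -> Parser s a s
symbol s = P (SP False [s]) (DP d)
  where
    d (_, x : xs) | x == s = Just (x, xs)
    d _                    = Nothing

-- run the dynamic part (illustrative helper)
runP :: Parser s a b -> a -> [s] -> Maybe (b, [s])
runP (P _ (DP d)) a ss = d (a, ss)
```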
Arrow Choice
Another useful class is ArrowChoice. This class gives us support for if/else and case-like expressions with arrows.
This adds support for the following combinators:
left :: a b c -> a (Either b d) (Either c d)
right :: a b c -> a (Either d b) (Either d c)
(+++) :: a b c -> a b' c' -> a (Either b b') (Either c c')
(|||) :: a b d -> a c d -> a (Either b c) d
ArrowChoice lets arrows branch based on their input. The (|||) combinator says: “if the input is Left b, run the first arrow; if it’s Right c, run the second arrow.” This is how we express conditional logic in arrows.
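On the function arrow, (|||) behaves like Prelude's either. A small sketch:

```haskell
import Control.Arrow ((|||))

-- branch on the input: Left runs the first function, Right the second
describe :: Either Int String -> String
describe = (\n -> "number: " ++ show n) ||| (\s -> "string: " ++ s)
```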
Arrows and Monads
If we add just one more class to Arrow we can recover Monads from Arrows:
class Arrow a => ArrowApply a where
app :: a (a b c, b) c
app applies its first argument (an arrow) to its second argument (the input to that arrow). With this we have recovered all of the power of Monads. Arrows with ArrowApply are of little interest in themselves, because they can always be expressed as Monads.
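One direction of this equivalence is witnessed in the standard library: Control.Arrow's Kleisli wrapper turns any Monad into an ArrowApply (and ArrowMonad goes the other way). A sketch with Maybe:

```haskell
import Control.Arrow (Kleisli (..), app)

-- a Kleisli arrow over Maybe: succeeds only on even input
halve :: Kleisli Maybe Int Int
halve = Kleisli (\n -> if even n then Just (n `div` 2) else Nothing)
```

Running app on a pair of (arrow, input) then behaves exactly like applying the arrow directly, which is what gives ArrowApply the full power of bind.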
Monad transformers and Arrow functors
Just as with Monads we can build complex Arrows by stacking simpler Arrows on top of each other. This is done with Arrow Functors.
newtype MaybeFunctor a b c = MF (a b (Maybe c))
liftMaybe :: Arrow a => a b c -> MaybeFunctor a b c
liftMaybe f = MF (f >>> arr Just)
The paper also gives types for a StateFunctor and a CPSFunctor.
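A hedged sketch of what the Arrow instance for MaybeFunctor might look like under the modern Category/Arrow hierarchy, assuming the underlying arrow is ArrowChoice (needed to short-circuit once a Nothing appears); Hughes spells out his own definitions in the paper:

```haskell
import Control.Arrow
import qualified Control.Category as Cat

newtype MaybeFunctor a b c = MF (a b (Maybe c))

instance ArrowChoice a => Cat.Category (MaybeFunctor a) where
  id = MF (arr Just)
  -- run f; a Nothing skips g entirely
  MF g . MF f = MF $
    f >>> arr (maybe (Left ()) Right) >>> (arr (const Nothing) ||| g)

instance ArrowChoice a => Arrow (MaybeFunctor a) where
  arr f = MF (arr (Just . f))
  first (MF f) = MF (first f >>> arr (\(mc, d) -> fmap (\c -> (c, d)) mc))

liftMaybe :: Arrow a => a b c -> MaybeFunctor a b c
liftMaybe f = MF (f >>> arr Just)

-- illustrative helpers over the function arrow
runMF :: MaybeFunctor (->) b c -> b -> Maybe c
runMF (MF f) = f

safeDiv :: MaybeFunctor (->) (Int, Int) Int
safeDiv = MF (\(x, y) -> if y == 0 then Nothing else Just (x `div` y))
```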
When to use Arrows vs. Monads
This section was suggested by AI. I’m not sure it follows from the paper’s discussion of Arrows vs Monads, but here’s my take anyway: it’s largely personal preference. Unless your problem requires the extra generality of Arrows, Monads and Arrows-with-ArrowApply are equivalent in power; Monads simply have far more support in the Haskell ecosystem.
My approach to programming is to prioritise learning over productivity, so if you have a desire to learn more about Arrows, they’re probably a better choice than Monads.
But, here’s a more serious answer:
Use Monads when:
- Your computation is naturally sequential (step-by-step)
- You don’t need static analysis of the computation structure
Use Arrows when:
- You need to analyze computations before running them (like the parser example)
- You need to compose computations in parallel (with &&&)
- You’re building something that benefits from explicit input/output tracking
The tradeoff: Arrows are more general but have less tooling support. If monads work for your problem, they’re probably the pragmatic choice.