Here are a pair of constraints from the Solving Scheduling Problems with Integer Linear Programming memo about fairly distributing rota assignments amongst the available people:

\[ \forall t \in \mathcal T \text{, } \forall p \in \mathcal P \text{, } \forall r \in \mathcal R \text{, } X_p \geqslant A_{tpr} \]

\[ \forall p \in \mathcal P \text{, } X_p \leqslant \sum_{t \in \mathcal T} \sum_{r \in \mathcal R} A_{tpr} \]

It’s a bit dense, but it only has the necessary information. Now here’s the corresponding Python:

```python
for slot in range(slots):
    for person in people:
        for role in roles:
            problem += is_assigned[person] >= assignments[slot, person, role]

for person in people:
    problem += is_assigned[person] <= pulp.lpSum(
        assignments[slot, person, role]
        for slot in range(slots)
        for role in roles
    )
```

That’s a bit more verbose, takes up more space, we’ve got this `pulp.lpSum` thing, this mysterious `problem` variable… I’d prefer to be able to write an ASCII equivalent of the mathematical form, and have the Python generated for me.

The source file for this memo is Literate Haskell: you can load it directly into GHCi. So here’s the necessary ceremony:

```haskell
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE GADTs #-}

import Control.Monad.Trans.Class
import Control.Monad.Trans.Except
import Control.Monad.Trans.Reader
import Control.Monad.Trans.State

import Data.Foldable (for_)
import Data.List (intercalate, sort)
import Data.Maybe (listToMaybe)
```

Let’s begin!

My main motivation when coming up with the concrete syntax was “it should look like the maths, but be ASCII, and not be LaTeX because that would be a pain”. It should also be concise, in particular it shouldn’t be necessary to specify what type quantifiers range over (that should be inferred).

I’m going to use the basic rota generator described in the other memo as a running example.

An ILP language is, necessarily, not very expressive. So I decided on the following types:

- User-defined types (sets of values)
- Predicates
- Integers
- N-dimensional binary arrays
- N-dimensional integer arrays

I’ve not implemented integer arrays because the example didn’t need them.

Predicates take parameters and arrays take indices; the types of these will also be modelled. A predicate which takes two integers is a different type to a predicate which takes an integer and a set-value.

We also want to distinguish between “parameter” variables, which the user of the model will supply, and “model” variables, which the model will solve for.

Here’s an example with some comments:

```
-- Define three new types
type TimeSlot, Person, Role

-- M is an input of type integer
param integer M

-- is_leave is an input of type (TimeSlot, Person) -> bool
param predicate is_leave(TimeSlot, Person)

-- A is a 3D binary array the solver will try to produce
model binary A[TimeSlot, Person, Role]

-- X is a 1D binary array (or "vector" if you like special-case
-- terminology...) the solver will try to produce
model binary X[Person]
```

Now we have our constraints and objective function:

```
-- In every time slot, each role is assigned to exactly one person
forall t, r; sum{p} A[t,p,r] = 1

-- Nobody is assigned multiple roles in the same time slot
forall t, p; sum{r} A[t,p,r] <= 1

-- Nobody is assigned a role in a slot they are on leave for
forall p, t if is_leave(t,p), r; A[t,p,r] = 0

-- Nobody works too many shifts
forall p; sum{t, r} A[t,p,r] <= M

-- Assignments are fairly distributed
forall t, p, r; X[p] >= A[t,p,r]
forall p; X[p] <= sum{t, r} A[t,p,r]

maximise sum{p} X[p]
```

Look how concise they are! They don’t reference any types either!

Even though `forall` and `sum` are conceptually similar (bring a new variable into scope and do some sort of quantification) I picked different syntax for them because `forall` introduces multiple actual constraints: one for each value of the user-defined type being quantified over. The `forall` quantifier is part of the meta-language, the `sum` quantifier is part of the ILP language.

Let’s talk abstract syntax. I didn’t want to write a parser, so we’ll skip over that. We’re now getting to our first real bit of Haskell code.

In the concrete syntax there’s only one `forall` and it’s followed by a list of variables to quantify over, and then the constraint. To simplify the implementation, the abstract-syntax `CForall` has exactly one variable, and can either be followed by another `CForall` or a `CCheck` (the bit like `sum{r} A[t,p,r] <= 1`).

A `CForall` also contains an optional predicate restriction, which is expressed as the predicate’s name followed by the list of arguments.

```haskell
type Name = String

data Constraint a
  = CForall (TypedName a) (Maybe (Name, [Name])) (Constraint a)
  | CCheck Op (Expression a) (Expression a)
  deriving Eq

data Op = OEq | OLt | OGt | OLEq | OGEq
  deriving Eq
```

We’ll talk about the `TypedName` bit in the next section, but it’s essentially the name of the quantifier variable. Here are some examples, where `E1` and `E2` are placeholders for expressions:

**concrete:** `E1 <= E2`
**abstract:** `CCheck OLEq E1 E2`

**concrete:** `forall x; E1 = E2`
**abstract:** `CForall (Untyped "x") Nothing (CCheck OEq E1 E2)`

**concrete:** `forall x, y if p(x, y); E1 < E2`
**abstract:** `CForall (Untyped "x") Nothing (CForall (Untyped "y") (Just ("p", ["x", "y"])) (CCheck OLt E1 E2))`

The expression language is a bit richer; there are more forms of expressions than there are constraints:

```haskell
data Expression a
  = ESum (TypedName a) (Expression a)
  | EIndex Name [Name]
  | EVar Name
  | EConst Integer
  | EMul Integer (Expression a)
  | EAdd (Expression a) (Expression a)
  deriving Eq
```

Here are some more examples:

**concrete:** `1 + 2`
**abstract:** `EAdd (EConst 1) (EConst 2)`

**concrete:** `X[y,z]`
**abstract:** `EIndex "X" ["y", "z"]`

**concrete:** `3 * sum{i} X[i]`
**abstract:** `EMul 3 (ESum (Untyped "i") (EIndex "X" ["i"]))`

Reading terms expressed in this abstract syntax would be a bit of a pain, so here’s some pretty-printing:

```haskell
instance Show (Constraint a) where
  show (CForall tyname (Just (rname, rargs)) c) =
    "forall " ++ show tyname ++ " if " ++ rname ++ "(" ++ strings rargs ++ "); " ++ show c
  show (CForall tyname _ c) =
    "forall " ++ show tyname ++ "; " ++ show c
  show (CCheck op expr1 expr2) =
    show expr1 ++ " " ++ show op ++ " " ++ show expr2

instance Show (Expression a) where
  show (ESum tyname expr) = "sum{" ++ show tyname ++ "} " ++ show expr
  show (EIndex name args) = name ++ "[" ++ strings args ++ "]"
  show (EVar name) = name
  show (EConst i) = show i
  show (EMul i expr) = show i ++ " * " ++ show expr
  show (EAdd expr1 expr2) = show expr1 ++ " + " ++ show expr2

instance Show Op where
  show OEq = "="
  show OLt = "<"
  show OGt = ">"
  show OLEq = "<="
  show OGEq = ">="
```

It looks like this:

```
λ> CForall (Untyped "x") Nothing (CForall (Untyped "y") (Just ("p", ["x", "y"])) (CCheck OLt (EMul 3 (ESum (Untyped "i") (EIndex "X" ["i"]))) (EConst 10)))
forall x; forall y if p(x, y); 3 * sum{i} X[i] < 10
```

The `strings` helper function used in `CForall` and `EIndex` just comma-separates a list of strings:

```haskell
strings :: [String] -> String
strings = intercalate ", "
```

Previously, I said we would have these types:

- User-defined types
- Predicates
- Integers
- N-dimensional binary arrays
- N-dimensional integer arrays (not actually implemented)

And we also need to distinguish between “parameter” variables and “model” variables. ILP solvers only operate on matrices, so actually what we have are three parameter types:

- User-defined types
- Predicates
- Integers

And two model types:

- N-dimensional binary arrays
- N-dimensional integer arrays

```haskell
data Ty
  = ParamCustom Name
  | ParamInteger
  | ParamPredicate [Ty]
  | ModelBinary [Ty]
  deriving Eq
```

Remember the `TypedName` in the constraint and expression abstract syntax? It was used wherever a new name was brought into scope: `CForall` and `ESum`. A `TypedName` is either a `Name` by itself or a `Name` associated with a `Ty`:

```haskell
data IsTyped
data IsUntyped

data TypedName a where
  Untyped :: Name -> TypedName IsUntyped
  Typed   :: Name -> Ty -> TypedName IsTyped

instance Eq (TypedName a) where
  Untyped n1 == Untyped n2 = n1 == n2
  Typed n1 ty1 == Typed n2 ty2 = n1 == n2 && ty1 == ty2
```

When generating code, we’ll need to know which types are being quantified over. So the type checker will fill in the types as it goes, turning our *untyped* expressions and constraints into *typed* expressions and constraints.

```haskell
type UntypedConstraint = Constraint IsUntyped
type UntypedExpression = Expression IsUntyped

type TypedConstraint = Constraint IsTyped
type TypedExpression = Expression IsTyped
```

And let’s add some pretty-printing for types too:

```haskell
instance Show Ty where
  show (ParamCustom name) = "param<" ++ name ++ ">"
  show ParamInteger = "param<integer>"
  show (ParamPredicate args) = "param<predicate(" ++ strings (map show args) ++ ")>"
  show (ModelBinary args) = "model<binary[" ++ strings (map show args) ++ "]>"

instance Show (TypedName a) where
  show (Untyped name) = name
  show (Typed name ty) = show ty ++ " " ++ name
```

Our running example is the set of basic rota constraints from the other memo. We’ve already seen the concrete syntax; here’s the abstract syntax:

```haskell
type Binding = (Name, Ty)

globals :: [Binding]
globals =
  [ ("M", ParamInteger)
  , ("is_leave", ParamPredicate [ParamCustom "TimeSlot", ParamCustom "Person"])
  , ("A", ModelBinary [ParamCustom "TimeSlot", ParamCustom "Person", ParamCustom "Role"])
  , ("X", ModelBinary [ParamCustom "Person"])
  ]

constraints :: [UntypedConstraint]
constraints =
  [ -- In every time slot, each role is assigned to exactly one person
    CForall (Untyped "t") Nothing (CForall (Untyped "r") Nothing (CCheck OEq (ESum (Untyped "p") (EIndex "A" ["t", "p", "r"])) (EConst 1)))
    -- Nobody is assigned multiple roles in the same time slot
  , CForall (Untyped "t") Nothing (CForall (Untyped "p") Nothing (CCheck OLEq (ESum (Untyped "r") (EIndex "A" ["t", "p", "r"])) (EConst 1)))
    -- Nobody is assigned a role in a slot they are on leave for
  , CForall (Untyped "p") Nothing (CForall (Untyped "t") (Just ("is_leave", ["t", "p"])) (CForall (Untyped "r") Nothing (CCheck OEq (EIndex "A" ["t", "p", "r"]) (EConst 0))))
    -- Nobody works too many shifts
  , CForall (Untyped "p") Nothing (CCheck OLEq (ESum (Untyped "t") (ESum (Untyped "r") (EIndex "A" ["t", "p", "r"]))) (EVar "M"))
    -- Assignments are fairly distributed
  , CForall (Untyped "t") Nothing (CForall (Untyped "p") Nothing (CForall (Untyped "r") Nothing (CCheck OGEq (EIndex "X" ["p"]) (EIndex "A" ["t", "p", "r"]))))
  , CForall (Untyped "p") Nothing (CCheck OLEq (EIndex "X" ["p"]) (ESum (Untyped "t") (ESum (Untyped "r") (EIndex "A" ["t", "p", "r"]))))
  ]
```

That’s pretty verbose, more than the Python! Good thing I’d write a parser for this if I were doing it for real.

This is the hairy bit of the memo. I’ve not gone for any particular type inference algorithm; I just went for the straightforward way to do it for the syntax and types I had.

We’ll use a monad stack for the type checker:

```haskell
type TcFun = ReaderT [Binding] (StateT [Binding] (Except String))
-- environment       ^^^^^^^^^
-- unresolved free variables           ^^^^^^^^^
-- error message                                         ^^^^^^
```

To get a feel for how `TcFun` is useful, let’s go through some utility functions.

**Type errors:**

```haskell
typeError :: String -> TcFun a
typeError = lift . lift . throwE

eExpected :: String -> Name -> Maybe Ty -> TcFun a
eExpected eTy name (Just aTy) = typeError $
  "Expected " ++ eTy ++ " variable, but '" ++ name ++ "' is " ++ show aTy ++ " variable."
eExpected eTy name Nothing = typeError $
  "Expected " ++ eTy ++ " variable, but could not infer a type for '" ++ name ++ "'."
```

Throwing a type error is pretty important, so we’ll need a function for that, and for one of the more common errors.

**Looking up the type of a name (if it’s bound):**

```haskell
getTy :: Name -> TcFun (Maybe Ty)
getTy name = lookup name <$> ask
```

The bindings in the state are just to keep track of free variables, and are not used when checking something’s type.

**Running a subcomputation with a name removed from the environment:**

```haskell
withoutBinding :: Name -> TcFun a -> TcFun a
withoutBinding name = withReaderT (remove name)
```

For example, if we have a global `x` and a constraint `forall x; A[x]`, the `x` in `A[x]` is not the global `x`; it’s the `x` bound by the `forall`. Don’t worry, it’s only removed while typechecking the body of the `CForall` or `ESum` which introduced the new binding.

**Asserting a variable has a type:**

```haskell
assertType :: Name -> Ty -> TcFun ()
assertType name eTy = getTy name >>= \case
  Just aTy
    | eTy == aTy -> pure ()
    | otherwise -> eExpected (show eTy) name (Just aTy)
  Nothing -> lift $ modify ((name, eTy):)
```

Takes a name and an expected type, and checks that any pre-existing binding matches. If there is no pre-existing binding, the name is introduced as a free variable.

**Removing a free variable from the state:**

```haskell
delFree :: Name -> TcFun Ty
delFree name = lookup name <$> lift get >>= \case
  Just ty -> do
    lift $ modify (remove name)
    pure ty
  Nothing -> typeError $ "Could not infer a type for '" ++ name ++ "'."
```

Looks up the type of a free variable, removes the variable from the state, and returns the type. If we fail to find a type for the variable, it’s unused, which is a type error (as we can’t infer a concrete type).

The basic idea is to walk through the abstract syntax: unify types when they arise; and for `CForall` and `ESum` check that the inner constraint (or expression) has a free variable with the right name and type.

**Typechecking argument lists:**

Let’s start with the simplest case: type checking an argument list, which arises in quantifier predicate constraints and `EIndex`. The function takes the names of the argument variables and their expected types, and checks that the variables do have those types.

```haskell
typecheckArgList :: [Name] -> [Ty] -> TcFun ()
```

The recursive case takes the name of the current argument and its expected type. It then looks up the actual type of the name. If it has a type, check that it’s the same as the expected type and either move onto the next parameter or throw an error. If there is no binding, `assertType` records it as a free variable.

```haskell
typecheckArgList (name:ns) (expectedTy:ts) = do
  assertType name expectedTy
  typecheckArgList ns ts
```

Ultimately the `typecheckExpression` and `typecheckConstraint` functions we’ll get to later will make sure all these free variables are bound by a `forall` or a `sum`.

The base case is when we run out of argument names or types; there should be the same number of each:

```haskell
typecheckArgList [] [] = pure ()
typecheckArgList ns [] = typeError $ "Expected " ++ show (length ns) ++ " fewer arguments."
typecheckArgList [] ts = typeError $ "Expected " ++ show (length ts) ++ " more arguments."
```

**Typechecking expressions:**

Expressions have a few different parts, so let’s go through them one at a time.

```haskell
typecheckExpression :: UntypedExpression -> TcFun TypedExpression
typecheckExpression e0 = decorate e0 (go e0)
  where
```

The `decorate` function, defined further below, appends the pretty-printed expression to any error message. So by `decorate`ing every recursive call, we get an increasingly wide view of the error. Like this:

```
Found variable 'x' at incompatible types param<integer> and param<index>.
 in x
 in A[x] = x
 in forall x; A[x] = x
```
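This decoration trick isn’t specific to Haskell. Here’s a hypothetical Python analogue (the `decorate` and `typecheck` functions below are toy stand-ins, not part of the memo’s code): catch the error on the way out of each recursive call, append the pretty-printed term, and re-raise.

```python
def decorate(term, thunk):
    """Run thunk(); if it fails, append the pretty-printed term to the error."""
    try:
        return thunk()
    except ValueError as e:
        raise ValueError(str(e) + "\n in " + term) from None

def typecheck(expr):
    # toy checker: the bare variable "x" always fails, anything else is fine
    if expr == "x":
        raise ValueError("Could not infer a type for 'x'.")
    return "ok"

def check_nested():
    # decorating every recursive call gives an increasingly wide view of the error
    return decorate("forall x; A[x] = x",
        lambda: decorate("A[x] = x",
            lambda: decorate("x",
                lambda: typecheck("x"))))
```

Each `decorate` layer only sees the term it was given, so the context lines accumulate from the innermost term outwards, just like the sample error above.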

`ESum` introduces a new binding. The way I’ve handled this is by *unbinding* the name (in case there was something with the same name from a wider scope), type-checking the inner expression, and then (1) asserting that there is a free variable with the name of the bound variable and (2) storing its type.

```haskell
go (ESum (Untyped name) expr) = do
  expr' <- withoutBinding name $ typecheckExpression expr
  delFree name >>= \case
    ty@(ParamCustom _) -> pure (ESum (Typed name ty) expr')
    aTy -> eExpected "param<$custom>" name (Just aTy)
```

`EIndex` requires checking an argument list. I’m not allowing quantifying over model variables, so in the expression `EIndex name args`, `name` *must* refer to a global. All globals are of known types, so we can look up the type of the argument list from the global environment.

```haskell
go (EIndex name args) = getTy name >>= \case
  Just (ModelBinary argtys) -> do
    typecheckArgList args argtys
    pure (EIndex name args)
  aTy -> eExpected "model<binary(_)>" name aTy
```

`EVar` uses a variable directly, in which case the variable *must* be an integer. This is handled by looking for a binding and, if there isn’t one, introducing a new free variable.

```haskell
go (EVar name) = do
  assertType name ParamInteger
  pure (EVar name)
```

`EConst`, `EMul`, and `EAdd` are pretty simple and just involve recursive calls to `typecheckExpression`.

```haskell
go (EConst k) = pure (EConst k)
go (EMul k expr) = do
  expr' <- typecheckExpression expr
  pure (EMul k expr')
go (EAdd expr1 expr2) = do
  expr1' <- typecheckExpression expr1
  expr2' <- typecheckExpression expr2
  pure (EAdd expr1' expr2')
```

The input to `typecheckExpression` is an `UntypedExpression` and the output is a `TypedExpression`. We get there by rewriting `ESum` constructs to contain the inferred type of the quantifier variable. This will be useful when generating code.

**Typechecking constraints:**

Typechecking a constraint is pretty much the same as typechecking an expression. `CForall` is like `ESum`, `CCheck` is like `EAdd`. The only new thing is that a `CForall` can have a predicate constraint… but that’s typechecked in the same way as an `EIndex`: get the argument types of the predicate from the environment, and check that against the argument variables.

Here it is:

```haskell
typecheckConstraint :: UntypedConstraint -> TcFun TypedConstraint
typecheckConstraint c0 = decorate c0 (go c0)
  where
    go (CForall (Untyped name) (Just (rname, rargs)) c) = getTy rname >>= \case
      Just (ParamPredicate argtys) -> do
        typecheckArgList rargs argtys
        c' <- withoutBinding name $ typecheckConstraint c
        ty <- delFree name
        pure (CForall (Typed name ty) (Just (rname, rargs)) c')
      aTy -> eExpected "param<predicate(_)>" rname aTy
    go (CForall (Untyped name) Nothing c) = do
      c' <- withoutBinding name $ typecheckConstraint c
      ty <- delFree name
      pure (CForall (Typed name ty) Nothing c')
    go (CCheck op expr1 expr2) = do
      expr1' <- typecheckExpression expr1
      expr2' <- typecheckExpression expr2
      pure (CCheck op expr1' expr2')
```

While `typecheckConstraint` works, it leaves something to be desired. Here’s a slightly nicer interface:

```haskell
typecheckConstraint_ :: [Binding] -> UntypedConstraint -> Either String TypedConstraint
typecheckConstraint_ env0 c0 =
    check =<< runExcept (runStateT (runReaderT (typecheckConstraint c0) env0) [])
  where
    check (c, []) = Right c
    check (_, free) = Left ("Unbound free variables: " ++ strings (sort (map fst free)) ++ ".")
```

This:

- Takes the global bindings as an argument.
- Does away with the `TcFun`; it returns a plain `Either`.
- Checks that no free variables leak out.

Some utility functions used above are:

```haskell
remove :: Eq a => a -> [(a, b)] -> [(a, b)]
remove a = filter ((/=a) . fst)

decorate :: Show a => a -> TcFun b -> TcFun b
decorate e = goR
  where
    goR m = ReaderT (goS . runReaderT m)
    goS m = StateT (goE . runStateT m)
    goE = withExcept (\err -> err ++ "\n in " ++ show e)
```

Here’s a little function to print out the inferred type, or type error, for all of our constraints from the running example:

```haskell
demoTypeInference :: IO ()
demoTypeInference = for_ constraints $ \constraint -> do
  case typecheckConstraint_ globals constraint of
    Right c' -> print c'
    Left err -> putStrLn err
  putStrLn ""
```

Behold!

```
λ> demoTypeInference
forall param<TimeSlot> t; forall param<Role> r; sum{param<Person> p} A[t, p, r] = 1

forall param<TimeSlot> t; forall param<Person> p; sum{param<Role> r} A[t, p, r] <= 1

forall param<Person> p; forall param<TimeSlot> t if is_leave(t, p); forall param<Role> r; A[t, p, r] = 0

forall param<Person> p; sum{param<TimeSlot> t} sum{param<Role> r} A[t, p, r] <= M

forall param<TimeSlot> t; forall param<Person> p; forall param<Role> r; X[p] >= A[t, p, r]

forall param<Person> p; X[p] <= sum{param<TimeSlot> t} sum{param<Role> r} A[t, p, r]
```

Looks pretty good, all types are inferred as they should be.

Here’s a broken example, which arose when I mistyped one of the constraints:

```
λ> either putStrLn print $ typecheckConstraint_ globals (CForall (Untyped "t") Nothing (CForall (Untyped "p") Nothing (CForall (Untyped "r") Nothing (CCheck OLEq (EIndex "X" ["p"]) (ESum (Untyped "t") (ESum (Untyped "r") (EIndex "A" ["t", "p", "r"])))))))
Could not infer a type for 'r'.
 in forall r; X[p] <= sum{t} sum{r} A[t, p, r]
 in forall p; forall r; X[p] <= sum{t} sum{r} A[t, p, r]
 in forall t; forall p; forall r; X[p] <= sum{t} sum{r} A[t, p, r]
```

I’d added extra `forall t` and `forall r` quantifiers, which are wrong because those variables are bound by `sum`s. So the types of the `forall`-bound variables can’t be inferred.

I don’t want to write (or learn) bindings to ILP solvers, that sounds like a pain, and I already know PuLP. So what I do want to do is generate the PuLP-using Python code the abstract syntax corresponds to.

Most of `codegenExpression`, which produces a Python expression, is straightforward:

```haskell
codegenExpression :: TypedExpression -> String
codegenExpression (EIndex name args) = name ++ "[" ++ strings args ++ "]"
codegenExpression (EVar name) = name
codegenExpression (EConst i) = show i
codegenExpression (EMul i expr) = show i ++ " * " ++ codegenExpression expr
codegenExpression (EAdd expr1 expr2) =
  "(" ++ codegenExpression expr1 ++ " + " ++ codegenExpression expr2 ++ ")"
```

The complex bit is handling `ESum`, which introduces a generator expression, and multiple `ESum`s are collapsed:

```haskell
codegenExpression (ESum tyname0 expr0) = go [tyname0] expr0
  where
    go vars (ESum tyname expr) = go (tyname:vars) expr
    go vars e = "pulp.lpSum(" ++ codegenExpression e ++ " " ++ go' (reverse vars) ++ ")"

    go' [] = ""
    go' (Typed name (ParamCustom ty):vs) =
      let code = "for " ++ name ++ " in " ++ ty
      in if null vs then code else code ++ " " ++ go' vs
```

I’m making some assumptions about how variables and types are represented in Python:

1. I assume all names are valid in Python, eg:

   **abstract:** `EMul 3 (EIndex "A" ["i"])`
   **code:** `3 * A[i]`

2. I assume user-defined types correspond to Python iterators, eg:

   **abstract:** `ESum (Typed "x" (ParamCustom "X")) expr`
   **code:** `pulp.lpSum(expr for x in X)`

These aren’t checked. Assumption (1) could be handled by restricting the characters in names (eg, to alphanumeric only). Assumption (2) would be handled if I were to implement the full abstract syntax, as generated code would be put in a function which takes all the parameter variables as arguments, and which creates the model variables. But this memo only implements expressions and constraints.
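To make assumption (2) concrete, here’s a hypothetical sketch of the runtime values the generated code expects. None of these names come from the memo, and the builtin `sum` stands in for `pulp.lpSum` so the snippet runs without PuLP:

```python
# Hypothetical runtime setup the generated code assumes (names are made up).
# User-defined types are Python iterables:
TimeSlot = range(2)
Person = ["alice", "bob"]
Role = ["developer"]

# Model arrays are dicts keyed by tuples; a real script would use
# pulp.LpVariable.dicts, but plain 0/1 ints suffice to show the shapes.
A = {(t, p, r): 1 if p == "alice" else 0
     for t in TimeSlot for p in Person for r in Role}

# A generated expression like
#     pulp.lpSum(A[t, p, r] for t in TimeSlot for r in Role)
# then works mechanically, here with the builtin sum:
shifts_for_alice = sum(A[t, "alice", r] for t in TimeSlot for r in Role)
```

With real PuLP variables the same generator expression would build a linear expression instead of an integer, but the indexing and iteration are identical.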

Generating code for constraints is nothing surprising, the only slight complication is needing to make sure the indentation works out when there are nested `CForall`s:

```haskell
codegenConstraint :: TypedConstraint -> String
codegenConstraint = unlines . go
  where
    go (CForall (Typed name (ParamCustom ty)) (Just (rname, rargs)) c) =
      [ "for " ++ name ++ " in " ++ ty ++ ":"
      , "    if not " ++ rname ++ "(" ++ strings rargs ++ "):"
      , "        continue"
      ] ++ indent (go c)
    go (CForall (Typed name (ParamCustom ty)) _ c) =
      [ "for " ++ name ++ " in " ++ ty ++ ":" ] ++ indent (go c)
    go (CCheck op expr1 expr2) =
      let e1 = codegenExpression expr1
          e2 = codegenExpression expr2
      in ["problem += " ++ e1 ++ " " ++ cgOp op ++ " " ++ e2]

    cgOp OEq = "=="
    cgOp op = show op

    indent = map ("    "++)
```

Here’s a little function to print out the generated code, or type error, for all of our constraints from the running example:

```haskell
demoCodeGen :: IO ()
demoCodeGen = for_ constraints $ \constraint -> do
  case typecheckConstraint_ globals constraint of
    Right c' -> do
      putStrLn ("# " ++ show c')
      putStrLn (codegenConstraint c')
    Left err -> do
      putStrLn err
      putStrLn ""
```

Behold, again!

```
λ> demoCodeGen
# forall param<TimeSlot> t; forall param<Role> r; sum{param<Person> p} A[t, p, r] = 1
for t in TimeSlot:
    for r in Role:
        problem += pulp.lpSum(A[t, p, r] for p in Person) == 1

# forall param<TimeSlot> t; forall param<Person> p; sum{param<Role> r} A[t, p, r] <= 1
for t in TimeSlot:
    for p in Person:
        problem += pulp.lpSum(A[t, p, r] for r in Role) <= 1

# forall param<Person> p; forall param<TimeSlot> t if is_leave(t, p); forall param<Role> r; A[t, p, r] = 0
for p in Person:
    for t in TimeSlot:
        if not is_leave(t, p):
            continue
        for r in Role:
            problem += A[t, p, r] == 0

# forall param<Person> p; sum{param<TimeSlot> t} sum{param<Role> r} A[t, p, r] <= M
for p in Person:
    problem += pulp.lpSum(A[t, p, r] for t in TimeSlot for r in Role) <= M

# forall param<TimeSlot> t; forall param<Person> p; forall param<Role> r; X[p] >= A[t, p, r]
for t in TimeSlot:
    for p in Person:
        for r in Role:
            problem += X[p] >= A[t, p, r]

# forall param<Person> p; X[p] <= sum{param<TimeSlot> t} sum{param<Role> r} A[t, p, r]
for p in Person:
    problem += X[p] <= pulp.lpSum(A[t, p, r] for t in TimeSlot for r in Role)
```

We’ve come to the end of my little language for defining ILP problems, but there is still more to be done if this were to become a fully-fledged language people could use. Here are some missing bits:

- A parser for the concrete syntax.
- More integer operations:
  - Integer ranges, in addition to user-defined set types, for `forall` and `sum`.
  - Arithmetic on integer indices.
  - Comparisons, in addition to predicate functions, in `forall` guards.
- The rest of the abstract syntax (along with typechecking and code generation): integer matrices, objective functions, and type and variable declarations.

For example, in the GOV.UK support rota, one of the constraints is that someone can’t be on support in two adjacent weeks. With integer ranges and arithmetic on integer indices, that could be expressed like so:

```
forall t in [1, N); forall p; (sum{r} A[t, p, r]) + (sum{r} A[t - 1, p, r]) <= 1
```
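If the code generator were extended that far, the generated Python would presumably loop over the integer range much like it loops over user-defined types. Here’s a hypothetical sketch of that check run against concrete 0/1 data, with the builtin `sum` standing in for `pulp.lpSum` and all names made up:

```python
# Hypothetical: check the adjacent-weeks property on a concrete assignment;
# real generated code would instead emit "problem += ... <= 1" per (t, p).
def adjacent_weeks_ok(A, N, people, roles):
    for t in range(1, N):  # t in [1, N)
        for p in people:
            this_week = sum(A[t, p, r] for r in roles)
            last_week = sum(A[t - 1, p, r] for r in roles)
            if this_week + last_week > 1:
                return False
    return True

people = ["alice", "bob"]
roles = ["dev"]
# alice works weeks 0 and 2, bob works week 1: nobody works adjacent weeks
good = {(0, "alice", "dev"): 1, (1, "alice", "dev"): 0, (2, "alice", "dev"): 1,
        (0, "bob", "dev"): 0, (1, "bob", "dev"): 1, (2, "bob", "dev"): 0}
# same rota but alice also works week 1, back to back with week 0
bad = {**good, (1, "alice", "dev"): 1}
```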

There’s also a small problem with the current abstract syntax: it’s a bit too flexible. This is not a valid ILP expression:

```
sum{foo} (sum{bar} A[foo, bar] + sum{baz} B[foo, baz])
```

Only direct `sum` nesting is permitted. There are two ways to solve this. One is to change the abstract syntax to preclude it, maybe something like this:

```haskell
data Void

data TaggedExpression tag a
  = TESum !tag (SumExpression a)
  | TEIndex Name [Name]
  | TEVar Name
  | TEConst Integer
  | TEMul Integer (TaggedExpression tag a)
  | TEAdd (TaggedExpression tag a) (TaggedExpression tag a)

data SumExpression a
  = SENest (TypedName a) (SumExpression a)
  | SEBreak (TaggedExpression Void a)
```

A `TaggedExpression Void` can’t contain any more `TESum` constructors, because the `Void` type is uninhabited. Another option is to add a check, between parsing and typechecking, that there are no invalidly nested `ESum`s.

The main reasons I switched away from hakyll are:

- Haskell compile times are really bad.
- The templating system is terrible, and I’ve never liked it.
- It’s very opinionated, and trying to do things in a different way is difficult.
- The API doesn’t expose everything you might want; I’ve found myself copying and pasting library code so I could make minor tweaks.

I considered switching to another static site generator like jekyll or hugo, which would have solved my problems with hakyll… and inevitably introduced their own problems. The thing is, static site generators *have* to be opinionated. If they’re not, they can’t automate things like generating sitemaps, RSS feeds, blog archives, etc. All of the features you want from a static site generator only work because the generator imposes some restrictions on the way you do things.

Eventually you’ll want to do something which the tool makes awkward.

So I decided to write my own tools, but I definitely *didn’t* want to write yet another static site generator, as I’d be introducing exactly the same problems! Instead, I’ve written two bespoke scripts, one to generate memo.barrucadu.co.uk and one to generate www.barrucadu.co.uk. They have many similarities, because I did memo.barrucadu.co.uk first and then modified it to get www.barrucadu.co.uk, but they’re two separate codebases and I could totally change how rendering pages works (say) in one without affecting the other. This is some of the flexibility you lose when you use a pre-existing static site generator.

I adopted some general principles based on my experiences with hakyll:

- Haskell compile times are bad, so write it in Python.
- I want a powerful templating system, so use jinja2.
- A lot of hakyll’s opinions come from how it handles incremental builds. I don’t need incremental builds, as I regenerate the entire website on every deploy, so I can do away with all that complexity.

I also had some wants:

- I didn’t want any URLs for pre-existing pages to change.
- I still wanted to use pandoc for rendering.
- I wanted to add some features hakyll had put me off investigating: sitemaps; automatically determined “first published” dates for memos (from git history); and templating the horrible mass of html that was barrucadu.co.uk/index.html.

This page you’re reading has been rendered by the new script. It’s a little bit longer than the old script, and more verbose in parts, but it’s got no opinions other than those I chose. I’ve also cut the deploy time down from half an hour to five minutes. Because it doesn’t do any incremental building, it’s a bit slower to generate the site (after the initial compile) than the old script was, but it’s still under a minute.

Maybe I’ll experiment more now that it’s no longer a huge pain.

This is binary search, implemented in Python:

```python
def binary_search(haystack, needle):
    lo = 0
    hi = len(haystack) - 1
    found = None
    iterations = 0
    while lo <= hi and found is None:
        iterations += 1
        mid = lo + (hi - lo) // 2
        if haystack[mid] == needle:
            found = mid
        elif haystack[mid] > needle:
            hi = mid - 1
        else:
            lo = mid + 1
    return (found, iterations)
```

It returns both the index of the found element and the number of iterations, for reasons which will become apparent in section 3.

How do we know it’s right? Well, let’s test it. I decided to do this with Hypothesis, a property-based testing tool. Here’s a property that an element in the list is found by `binary_search`:

```python
from hypothesis import given
from hypothesis.strategies import lists, integers

@given(
    haystack=lists(integers(), min_size=1),
    index=integers()
)
def test_needle_in_haystack(haystack, index):
    haystack.sort()
    needle = haystack[index % len(haystack)]
    found_index, _ = binary_search(haystack, needle)
    assert found_index >= 0
    assert found_index < len(haystack)
    assert haystack[found_index] == needle
```

Given a sorted nonempty list of integers, and an index into that list, the element at that position should be found by `binary_search`.

We should also test the other case, elements *not* in the list shouldn’t have an index returned:

```python
@given(
    haystack=lists(integers()),
    needle=integers()
)
def test_needle_might_be_in_haystack(haystack, needle):
    haystack.sort()
    found_index, _ = binary_search(haystack, needle)
    if needle in haystack:
        assert found_index >= 0
        assert found_index < len(haystack)
        assert haystack[found_index] == needle
    else:
        assert found_index is None
```

Binary search is pretty good, but I found myself wondering one day, while doing Advent of Code, whether we could do better by not splitting the search space in the middle, but instead biasing our split by assuming the data is distributed linearly. After all, if you look in a dictionary for “binary” you don’t start by opening it at “M”.

This is interpolation search; it’s like binary search, but different:

```python
def interpolation_search(haystack, needle):
    lo = 0
    hi = len(haystack) - 1
    found = None
    iterations = 0
    while lo <= hi and found is None:
        iterations += 1
        if needle < haystack[lo] or needle > haystack[hi]:
            # a new special case
            break
        elif haystack[lo] == haystack[hi]:
            # a new special case
            if needle == haystack[lo]:
                found = lo
            else:
                break
        else:
            # a more complex calculation
            mid = lo + int((((hi - lo) / (haystack[hi] - haystack[lo])) * (needle - haystack[lo])))
            if haystack[mid] == needle:
                found = mid
            elif haystack[mid] > needle:
                hi = mid - 1
            else:
                lo = mid + 1
    return (found, iterations)
```

It’s a bit more complex: we’ve got two new special cases, one for if the needle is not in the haystack at all, and one for if all the elements in the haystack are equal. We’ve also got a more complex `mid` calculation, trying to figure out where in the haystack the needle will appear.
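To make that guess concrete, here’s a tiny standalone sketch of just the midpoint calculation (the helper name is mine, not from the code above):

```python
def interpolation_midpoint(haystack, needle, lo, hi):
    # The biased "mid" guess from interpolation_search, assuming
    # haystack[lo] != haystack[hi] (that case is special-cased separately).
    return lo + int(((hi - lo) / (haystack[hi] - haystack[lo])) * (needle - haystack[lo]))

# On evenly spaced data, the guess lands exactly on the needle:
evens = [0, 2, 4, 6, 8, 10]
print(interpolation_midpoint(evens, 8, 0, 5))  # index 4, and evens[4] == 8
```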

We can use Hypothesis to compare our two search functions against each other:

```python
@given(
    haystack=lists(integers()),
    needle=integers()
)
def test_interpolation_equiv_binary(haystack, needle):
    haystack.sort()
    found_index_b, _ = binary_search(haystack, needle)
    found_index_i, _ = interpolation_search(haystack, needle)
    if found_index_b is None:
        assert found_index_i is None
    else:
        assert found_index_i is not None
        assert haystack[found_index_b] == haystack[found_index_i]
```

This is a common trick with property-based testing (and lots of types of testing, really): implement a simpler version of your thing and test that the more complex “real” implementation behaves the same as the simpler “test” implementation.
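The same idea works even without a property-based testing library. Here’s a throwaway sketch of my own (not from the original tests) pitting a naive insertion sort, as the simple “test” implementation, against Python’s built-in `sorted` on random inputs:

```python
import random

def insertion_sort(xs):
    # The simple reference implementation: insert each element
    # into its sorted position in the output list.
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

# Compare against the "real" implementation on random inputs.
rng = random.Random(0)
for _ in range(100):
    xs = [rng.randint(-10, 10) for _ in range(rng.randint(0, 20))]
    assert insertion_sort(xs) == sorted(xs)
```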

I intentionally didn’t do this:

```python
@given(
    haystack=lists(integers()),
    needle=integers()
)
def test_interpolation_equal_binary(haystack, needle):
    haystack.sort()
    found_index_b, _ = binary_search(haystack, needle)
    found_index_i, _ = interpolation_search(haystack, needle)
    assert found_index_b == found_index_i
```

Because the functions can differ if the needle is present in the haystack multiple times (eg, looking for `0` in `[0,0,1]`), and that’s fine.

Given our fancy midpoint calculation, the interpolation search *must* be better than (ie, do no more iterations than) binary search, right?

```python
@given(
    haystack=lists(integers(), min_size=1),
    index=integers()
)
def test_interpolation_beats_binary(haystack, index):
    haystack.sort()
    needle = haystack[index % len(haystack)]
    _, iterations_b = binary_search(haystack, needle)
    _, iterations_i = interpolation_search(haystack, needle)
    assert iterations_i <= iterations_b
```

Wrong.

```
==================================== FAILURES ===================================
________________________ test_interpolation_beats_binary ________________________

haystack = [0, 1, 3], index = 64

    @given(
        haystack=lists(integers(), min_size=1),
        index=integers()
    )
    def test_interpolation_beats_binary(haystack, index):
        haystack.sort()
        needle = haystack[index % len(haystack)]
        _, iterations_b = binary_search(haystack, needle)
        _, iterations_i = interpolation_search(haystack, needle)
>       assert iterations_i <= iterations_b
E       assert 2 <= 1

interpolation-search.py:111: AssertionError
---------------------------------- Hypothesis -----------------------------------
Falsifying example: test_interpolation_beats_binary(haystack=[0, 1, 3], index=64)
====================== 1 failed, 3 passed in 0.34 seconds =======================
```

We have a counterexample where binary search wins: with the list `[0, 1, 3]` and the index 64 (which gives a `needle` of 1), binary search finds it in 1 iteration but interpolation search takes 2.

Let’s step through that example:

| iteration | binary lo | binary hi | binary mid | binary found | interp lo | interp hi | interp mid | interp found |
|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 2 | | False | 0 | 2 | | False |
| 1 | 0 | 2 | 1 | True | 0 | 2 | 0 | False |
| 2 | | | | | 1 | 2 | 1 | True |

In iteration 1, the binary search picks the middle element, which is the right answer. But the interpolation search doesn’t. It’s thrown off by the assumption we’ve made in the `mid` calculation: that the values will be linearly distributed. If they’re not, the biasing of the interpolation search towards one end of the search space will work against us.
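To be fair to interpolation search, when the linearity assumption *does* hold it can win handily. Here’s a quick standalone check, restating both functions so it runs on its own (the binary search here is a sketch of the one from earlier in this post):

```python
def binary_search(haystack, needle):
    # Iteration-counting binary search, a sketch of the earlier implementation.
    lo, hi = 0, len(haystack) - 1
    found = None
    iterations = 0
    while lo <= hi and found is None:
        iterations += 1
        mid = (lo + hi) // 2
        if haystack[mid] == needle:
            found = mid
        elif haystack[mid] > needle:
            hi = mid - 1
        else:
            lo = mid + 1
    return (found, iterations)

def interpolation_search(haystack, needle):
    # As defined above.
    lo, hi = 0, len(haystack) - 1
    found = None
    iterations = 0
    while lo <= hi and found is None:
        iterations += 1
        if needle < haystack[lo] or needle > haystack[hi]:
            break
        elif haystack[lo] == haystack[hi]:
            if needle == haystack[lo]:
                found = lo
            else:
                break
        else:
            mid = lo + int(((hi - lo) / (haystack[hi] - haystack[lo])) * (needle - haystack[lo]))
            if haystack[mid] == needle:
                found = mid
            elif haystack[mid] > needle:
                hi = mid - 1
            else:
                lo = mid + 1
    return (found, iterations)

# Perfectly linear data: the interpolation guess is exact on the first try,
# while binary search has to halve its way down.
haystack = list(range(0, 1000, 2))
_, iterations_b = binary_search(haystack, 626)
_, iterations_i = interpolation_search(haystack, 626)
print(iterations_b, iterations_i)
```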

Sadly, my idle thought about a biased search hasn’t revolutionised computer science. Better luck next time.

- awsfiles has my AWS infrastructure
- dotfiles has my user-level configuration
- nixfiles has my system-level configuration

So really I just need to back up my data and those git repositories.

I store my backups in S3, and move them to the lower-cost (but harder-to-access) Glacier storage after 64 days. I use terraform to provision all my AWS stuff, including this backup location:

```terraform
resource "aws_s3_bucket" "backup" {
  bucket = "barrucadu-backups"
  acl    = "private"

  versioning {
    enabled = true
  }

  lifecycle_rule {
    id      = "archive"
    enabled = true

    transition {
      days          = 32
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 64
      storage_class = "GLACIER"
    }
  }
}
```

There’s also an IAM policy granting access to the bucket:

```terraform
resource "aws_iam_policy" "tool_duplicity" {
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload",
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "${aws_s3_bucket.backup.arn}",
        "${aws_s3_bucket.backup.arn}/*"
      ]
    }
  ]
}
EOF
}
```

This is the minimal set of permissions to run duplicity, I think. The bucket itself is versioned, but I don’t grant the backup user any versioning-related permissions (eg, they can’t delete an old version of a file). This is so that if the credentials for the backup user get leaked somehow, and someone deletes or overwrites my backups, I can recover them. The backups are encrypted, so someone downloading them is only a small concern.

Because I don’t take full filesystem backups I have two parts to my backup scripts. The main script:

- Checks for a host-specific backup script (not all hosts take backups)
- Creates a temporary directory for the backup to be generated in
- Runs the host-specific script
- Uses duplicity to generate a full or incremental backup, targeting the S3 bucket

It looks like this:

```sh
#!/bin/sh

set -e

# location of scripts
BACKUP_SCRIPT_DIR=$HOME/backup-scripts

# hostname
MY_HOST=`hostname`

# aws config
AWS_S3_BUCKET="barrucadu-backups"

BACKUP_TYPE=$1
if [[ -z "$BACKUP_TYPE" ]]; then
  echo 'specify a backup type!'
  exit 1
fi

if [[ -x "${BACKUP_SCRIPT_DIR}/host-scripts/${MY_HOST}" ]]; then
  DIR=`mktemp -d`
  trap "rm -rf $DIR" EXIT
  cd $DIR

  # generates a backup in ./$MY_HOST
  time $BACKUP_SCRIPT_DIR/host-scripts/$MY_HOST

  time $BACKUP_SCRIPT_DIR/duplicity.sh \
    $BACKUP_TYPE \
    $MY_HOST \
    "s3+http://${AWS_S3_BUCKET}/${MY_HOST}"
else
  echo 'nothing to do!'
fi
```

The `duplicity.sh` script sets some environment variables and common parameters:

```sh
#!/bin/sh

set -e

# location of scripts
BACKUP_SCRIPT_DIR=$HOME/backup-scripts

# aws config
AWS_PROFILE="backup"

if [[ ! -e $BACKUP_SCRIPT_DIR/passphrase.sh ]]; then
  echo 'missing passphrase file!'
  exit 1
fi
source $BACKUP_SCRIPT_DIR/passphrase.sh

export AWS_PROFILE=$AWS_PROFILE
export PASSPHRASE=$PASSPHRASE

nix run nixpkgs.duplicity -c \
  duplicity \
  --s3-european-buckets \
  --s3-use-multiprocessing \
  --s3-use-new-style \
  --verbosity notice \
  "$@"
```

Duplicity’s incremental backups are based on hashing chunks of files, so it can take incremental backups even though all the file modification times will have changed (because the backup is generated anew every time) since the last full backup.

The backups are encrypted with a 512-character password (the `PASSPHRASE` environment variable in `duplicity.sh`). The same password is used for all the backups, and each machine which takes backups has a copy of the password. The backups are useless if I lose the password, but for that to happen, I’d have to lose:

- Both of my home computers, in London
- A VPS, on a physical server in Nuremberg
- A dedicated server, in France somewhere

That seems pretty unlikely. Even if it does happen, any event (or sequence of events) which takes out those three locations in quick succession would probably give me big enough problems that not having a backup of my git repositories is a small concern—it could also take out my backups themselves, which are in Ireland.

These aren’t terribly interesting, or useful to anyone other than me, so I’ll just give an example rather than go through each one.

The script for dunwich, my VPS, backs up:

- All my public github repositories (I don’t have any private ones)
- All my self-hosted repositories (which are all private)
- My syncthing directory

It looks like this:

```sh
#! /usr/bin/env nix-shell
#! nix-shell -i bash -p jq

# I have no private github repos, and under 100 public ones; so this
# use of the public API is fine.
function clone_public_github_repos() {
  curl 'https://api.github.com/users/barrucadu/repos?per_page=100' 2>/dev/null | \
    jq -r '.[].clone_url' | \
    while read url; do
      git clone --bare "$url"
    done
}

function clone_all_dunwich_repos() {
  for dir in /srv/git/repositories/*.git; do
    url="git@dunwich.barrucadu.co.uk:$(basename $dir)"
    git clone --bare "$url"
  done
}

set -e

[[ -d dunwich ]] && rm -rf dunwich
mkdir dunwich
cd dunwich

cp -a $HOME/s syncthing

mkdir git
mkdir git/dunwich
mkdir git/github.com

pushd git/dunwich
clone_all_dunwich_repos
popd

pushd git/github.com
clone_public_github_repos
popd
```

The script creates the backup inside a `dunwich` directory: all the host-specific scripts generate their backup in a folder named after the host. This was useful in an earlier incarnation of my backup scripts, but isn’t really necessary now.

I run a full backup monthly, at midnight on the 1st. I run an incremental backup at 4am every Monday. The difference in times is to avoid overlap if the first of the month is a Monday (and I didn’t want to faff around with lock files).

The backups are taken by two systemd services which are defined in my NixOS configuration:

```nix
#############################################################################
## Backups
#############################################################################

systemd.timers.backup-scripts-full = {
  wantedBy = [ "timers.target" ];
  timerConfig = {
    OnCalendar = config.services.backup-scripts.OnCalendarFull;
  };
};

systemd.timers.backup-scripts-incr = {
  wantedBy = [ "timers.target" ];
  timerConfig = {
    OnCalendar = config.services.backup-scripts.OnCalendarIncr;
  };
};

systemd.services.backup-scripts-full = {
  description = "Take a full backup";
  serviceConfig.WorkingDirectory = config.services.backup-scripts.WorkingDirectory;
  serviceConfig.ExecStart = "${pkgs.zsh}/bin/zsh --login -c './backup.sh full'";
  serviceConfig.User = config.services.backup-scripts.User;
  serviceConfig.Group = config.services.backup-scripts.Group;
};

systemd.services.backup-scripts-incr = {
  description = "Take an incremental backup";
  serviceConfig.WorkingDirectory = config.services.backup-scripts.WorkingDirectory;
  serviceConfig.ExecStart = "${pkgs.zsh}/bin/zsh --login -c './backup.sh incr'";
  serviceConfig.User = config.services.backup-scripts.User;
  serviceConfig.Group = config.services.backup-scripts.Group;
};
```

The working directory, user, group, and frequencies are all configurable—but so far no host overrides them. I thought about having a separate backup user, but decided that it didn’t gain any security but cost some convenience (as everything I want to back up is owned by my user anyway).

I’m pleased to announce a new super-major release of dejafu, a library for testing concurrent Haskell programs.

While there are breaking changes, common use-cases shouldn’t be affected too significantly (or not at all). There is a brief guide to the changes, and how to migrate if necessary, on the website.

dejafu is a unit-testing library for concurrent Haskell programs. Tests are deterministic, and work by systematically exploring the possible schedules of your concurrency-using test case, allowing you to confidently check your threaded code.

HUnit and Tasty bindings are available.

dejafu requires your test case to be written against the `MonadConc` typeclass from the concurrency package. This is a necessity: dejafu cannot peek inside your `IO` or `STM` actions, so it needs to be able to plug in an alternative implementation of the concurrency primitives for testing. There is some guidance for how to switch from `IO` code to `MonadConc` code on the website.

If you really need `IO`, you can use `MonadIO` - but make sure it’s deterministic enough to not invalidate your tests!

Here’s a small example reproducing a deadlock found in an earlier version of the auto-update library:

```
> :{
autocheck $ do
  auto <- mkAutoUpdate defaultUpdateSettings
  auto
:}
[fail] Successful
    [deadlock] S0--------S1-----------S0-
[fail] Deterministic
    [deadlock] S0--------S1-----------S0-
    () S0--------S1--------p0--
```

dejafu finds the deadlock, and gives a simplified execution trace for each distinct result. More in-depth traces showing exactly what each thread did are also available. This is using a version of auto-update modified to use the `MonadConc` typeclass. The source is in the dejafu testsuite.

The highlights for this release are setup actions, teardown actions, and invariants:

- **Setup actions** are for things which are not really a part of your test case, but which are needed for it (for example, setting up a test distributed system). As dejafu can run a single test case many times, repeating this work can be a significant overhead. By defining this as a setup action, dejafu can “snapshot” the state at the end of the action, and efficiently reload it in subsequent executions of the same test.
- **Teardown actions** are for things you want to run after your test case completes, in all cases, even if the test deadlocks (for example). As dejafu controls the concurrent execution of the test case, inspecting shared state is possible even if the test case fails to complete.
- **Invariants** are effect-free atomically-checked conditions over shared state which must always hold. If an invariant throws an exception, the test case is aborted, and any teardown action run.

Here is an example of a setup action with an invariant:

```
> :{
autocheck $
  let setup = do
        var <- newEmptyMVar
        registerInvariant $ do
          value <- inspectMVar var
          when (value == Just 1) $ throwM Overflow
        pure var
  in withSetup setup $ \var -> do
       fork $ putMVar var 0
       fork $ putMVar var 1
       tryReadMVar var
:}
[fail] Successful
    [invariant failure] S0--P2-
[fail] Deterministic
    [invariant failure] S0--P2-
    Nothing S0----
    Just 0 S0--P1--S0--
```

In the `[invariant failure]` case, thread 2 is scheduled, writing the forbidden value “1” to the MVar, which terminates the test.

Here is an example of a setup action with a teardown action:

```
> :{
autocheck $
  let setup = newMVar ()
      teardown var (Right _) = show <$> tryReadMVar var
      teardown _   (Left e)  = pure (show e)
  in withSetupAndTeardown setup teardown $ \var -> do
       fork $ takeMVar var
       takeMVar var
:}
[pass] Successful
[fail] Deterministic
    "Nothing" S0---
    "Deadlock" S0-P1--S0-
```

The teardown action can perform arbitrary concurrency effects, including inspecting any mutable state returned by the setup action.

Setup and teardown actions were previously available in a slightly different form as the `dontCheck` and `subconcurrency` functions, which have been removed (see the migration guide if you used these).

Haskell typeclass instances have two parts: some *constraints*, and the *instance head*:

```haskell
newtype WrappedFunctor f a = WrappedFunctor (f a)

instance Functor f => Functor (WrappedFunctor f) where
--       ^^^^^^^^^ constraints
--                   ^^^^^^^^^^^^^^^^^^^^^^^^^^ head
  fmap f (WrappedFunctor fa) = WrappedFunctor (fmap f fa)
```

More specifically, the head is of the form `C (T a1 ... an)`, where `C` is the class, `T` is a type constructor, and `a1 ... an` are distinct type variables. The `FlexibleInstances` extension relaxes this restriction a little, allowing some (or all) of the `a1 ... an` to be arbitrary types, as well as type variables.

When the type checker needs to find an instance, it does so purely based on the head: constraints don’t come into it. The instance above means “whenever you use `WrappedFunctor f` as a functor, *regardless of what f is and even if we don’t know what it is yet*, then you can use this instance”, and a type error will be thrown if whatever concrete type `f` is instantiated to doesn’t in fact have a `Functor` instance.

You might think that, if we didn’t define the instance above and instead defined this one:

```haskell
instance Functor (WrappedFunctor Maybe) where
  fmap f (WrappedFunctor fa) = WrappedFunctor (fmap f fa)
```

…and then used a `WrappedFunctor f` as a functor, that the type checker would infer `f` must be `Maybe`. This is not so! Typeclass inference happens under an “open world” approach: just because only one instance is known *at this time* doesn’t mean there won’t be a second instance discovered later. Prematurely selecting the instance for `WrappedFunctor Maybe` could be unsound.

In GHC Haskell, we can express a constraint that two types have to be equal. For example, this is a weird way to check that two values are equal:

```haskell
-- this requires GADTs or TypeFamilies
funnyEq :: (Eq a, a ~ b) => a -> b -> Bool
funnyEq = (==)
```

We only have a constraint `Eq a`, not `Eq b`, but because of the `a ~ b` constraint, the type checker knows that they’re the same type:

```
> funnyEq 'a' 'b'
False
> funnyEq True True
True
> funnyEq True 'b'
<interactive>:22:1: error:
    • Couldn't match type ‘Bool’ with ‘Char’
        arising from a use of ‘funnyEq’
    • In the expression: funnyEq True 'b'
      In an equation for ‘it’: it = funnyEq True 'b'
```

Let’s put the two together now. Let’s throw away the two instances we defined above, and now look at this one:

```haskell
instance (f ~ Maybe) => Functor (WrappedFunctor f) where
  fmap f (WrappedFunctor fa) = WrappedFunctor (fmap f fa)
```

This instance means “whenever you use `WrappedFunctor f` as a functor, *regardless of what f is and even if we don’t know what it is yet*, then you can use this instance”, and a type error will be thrown if `f` cannot be instantiated to `Maybe`. This is different to the instance `Functor (WrappedFunctor Maybe)`!

If we have `instance Functor (WrappedFunctor Maybe)`:

```
> :t fmap (+1) (WrappedFunctor (pure 3))
  :: (Num b, Applicative f, Functor (WrappedFunctor f)) => WrappedFunctor f b
```

If we have `instance (f ~ Maybe) => Functor (WrappedFunctor f)`:

```
> :t fmap (+1) (WrappedFunctor (pure 3))
  :: Num b => WrappedFunctor Maybe b
```

With the latter, we get much better type inference. The downside is that this instance overlaps any more concrete instances, so we couldn’t (for example) define an instance for `WrappedFunctor Identity` as well.

But if you only need one instance, it’s a neat trick.

Here’s a concrete example from the dejafu-2.0.0.0 branch. I’ve introduced a `Program` type, to model concurrent programs. There’s one sort of `Program`, a `Program Basic`, which can be used as a concurrency monad (a `MonadConc`) directly. The instances are defined like so:

```haskell
instance (pty ~ Basic, MonadIO n) => MonadIO (Program pty n) where -- ...
instance (pty ~ Basic) => MonadTrans (Program pty) where -- ...
instance (pty ~ Basic) => MonadCatch (Program pty n) where -- ...
instance (pty ~ Basic) => MonadThrow (Program pty n) where -- ...
instance (pty ~ Basic) => MonadMask (Program pty n) where -- ...
instance (pty ~ Basic, Monad n) => MonadConc (Program pty n) where -- ...
```

If instead the instances had been defined for `Program Basic n`, then the type checker would have complained that the `pty` parameter is (in many cases) polymorphic, and not use these instances. This means every single use of a `Program pty n`, where `pty` was not otherwise constrained, would need a type annotation. By instead formulating the instances this way, the type checker *knows* that if you use a `Program pty n` as a `MonadConc`, then it must be a `Program Basic n`.

This has turned a potentially huge breaking change, requiring everyone who uses dejafu to add type annotations to their tests, into something which just works.

You can express a bunch of interesting problems in terms of ILP, and there are solvers which do a pretty good job of finding good solutions quickly. One of those interesting problems is scheduling, and there’s a nice write-up of how PyCon uses an ILP solver to generate schedules.

Another problem is rota generation, which is after all just a sort of scheduling. I have implemented a rota generator for GOV.UK’s technical support, and this memo is about how it works.

What is a rota?

Well, there are a bunch of time slots \(\mathcal T\), roles \(\mathcal R\), and people \(\mathcal P\). We can represent the assignments as a 3D binary matrix:

\[ \begin{split} A_{tpr} = \begin{cases} 1,&\text{ if, in time }t\text{, person }p\text{ is scheduled in role }r\\ 0,&\text{otherwise} \end{cases} \end{split} \]

Next we need some constraints on what a valid rota looks like.

For every pair of slots and roles, the sum of the assignments should be 1:

\[ \forall t \in \mathcal T \text{, } \forall r \in \mathcal R \text{, } \sum_{p \in \mathcal P} A_{tpr} = 1 \]

For every pair of slots and people, the sum of the assignments should be 0 (if they’re not assigned anything) or 1 (if they are):

\[ \forall t \in \mathcal T \text{, } \forall p \in \mathcal P \text{, } \sum_{r \in \mathcal R} A_{tpr} \in \{0, 1\} \]

We might give our people time off (how generous!), so there’s no point in generating a rota where someone gets scheduled during their time off.

Given a function \(leave : \mathcal P \mapsto 2^{\mathcal T}\), which gives the set of slots someone is on leave, then: for every pair of slots and people, all roles should be unassigned if the slot is in \(leave(p)\):

\[ \forall p \in \mathcal P \text{, } \forall t \in leave(p) \text{, } \forall r \in \mathcal R \text{, } A_{tpr} = 0 \]

We might also have a maximum number of shifts any one person can be assigned to in a rota.

Given such a limit \(M\), then: for every person, the sum of the assignments across *all* slots should be less than or equal to \(M\):

\[ \forall p \in \mathcal P \text{, } \sum_{t \in \mathcal T} \sum_{r \in \mathcal R} A_{tpr} \leqslant M \]

If all we wanted was constraints, then we could use a SAT solver, and it would probably do a better job than an ILP solver as a SAT solver is *built* for solving boolean constraints! But there’s one thing which is more easily expressible to an ILP solver than a SAT solver: objective functions to optimise.

Given our above constraints, we will get *a* rota, but it might not be very fair. One person might be scheduled ten times, and another not at all. We can encourage the solver to be more fair by providing it with an objective which results in more people being assigned.

First we’ll need an auxiliary variable to check whether someone has been assigned at all:

\[ \begin{split} X_p = \begin{cases} 1,&\text{ if person }p\text{ has any assignments}\\ 0,&\text{otherwise} \end{cases} \end{split} \]

We can use two new constraints to set the value of these \(X\) variables:

\[ \forall t \in \mathcal T \text{, } \forall p \in \mathcal P \text{, } \forall r \in \mathcal R \text{, } X_p \geqslant A_{tpr} \]

\[ \forall p \in \mathcal P \text{, } X_p \leqslant \sum_{t \in \mathcal T} \sum_{r \in \mathcal R} A_{tpr} \]

As both \(A_{tpr}\) and \(X_p\) are binary variables, this means \(X_p\) will be 1 if (first constraint) and only if (second constraint) person \(p\) has any assignments at all.
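As a sanity check on that if-and-only-if reasoning, here’s a tiny sketch of my own (not part of the rota generator) treating the two linking constraints as plain predicates over concrete binary values:

```python
def xp_feasible(X_p, A_p):
    # Check the two linking constraints for one person, where A_p is the
    # list of that person's binary A_tpr values across all slots and roles.
    return all(X_p >= a for a in A_p) and X_p <= sum(A_p)

# X_p = 1 is forced when there is any assignment...
print(xp_feasible(0, [0, 1, 0]))  # False
print(xp_feasible(1, [0, 1, 0]))  # True
# ...and X_p = 0 is forced when there are none.
print(xp_feasible(0, [0, 0, 0]))  # True
print(xp_feasible(1, [0, 0, 0]))  # False
```

Only the “correct” value of \(X_p\) satisfies both constraints at once, which is exactly what forces the solver’s hand.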

We then give an objective to the solver:

\[ \textbf{maximise } \sum_{p \in \mathcal P} X_p \]

The only way to increase the value of the sum is by assigning roles to more people, so that is what the solver will do.

PuLP is a Python library for interfacing with ILP solvers. It provides a somewhat nicer interface than directly dealing with the matrices and vectors on which ILP solvers operate, letting us express constraints as equations much like I have here.

Here’s how to express the above with PuLP:

```python
import pulp

# Parameters
slots = 0
people = []
roles = []
leave = {}
max_assignments_per_person = 0

# Create the 'problem'
problem = pulp.LpProblem("rota generator", sense=pulp.LpMaximize)

# Create variables
assignments = pulp.LpVariable.dicts(
    "A",
    ((slot, person, role) for slot in range(slots) for person in people for role in roles),
    cat="Binary")
is_assigned = pulp.LpVariable.dicts("X", people, cat="Binary")

# Add constraints
for slot in range(slots):
    for role in roles:
        # In every time slot, each role is assigned to exactly one person
        problem += pulp.lpSum(assignments[slot, person, role] for person in people) == 1
    for person in people:
        # Nobody is assigned multiple roles in the same time slot
        problem += pulp.lpSum(assignments[slot, person, role] for role in roles) <= 1
for person, bad_slots in leave.items():
    for slot in bad_slots:
        for role in roles:
            # Nobody is assigned a role in a slot they are on leave for
            problem += assignments[slot, person, role] == 0
for person in people:
    # Nobody works too many shifts
    problem += pulp.lpSum(assignments[slot, person, role] for slot in range(slots) for role in roles) <= max_assignments_per_person

# Constrain 'is_assigned' auxiliary variable
for slot in range(slots):
    for person in people:
        for role in roles:
            # If
            problem += is_assigned[person] >= assignments[slot, person, role]
for person in people:
    # Only if
    problem += is_assigned[person] <= pulp.lpSum(assignments[slot, person, role] for slot in range(slots) for role in roles)

# Add objective
problem += pulp.lpSum(is_assigned[person] for person in people)

# Solve with the Coin/Cbc solver
problem.solve(pulp.solvers.COIN_CMD())

# Print the solution!
for slot in range(slots):
    print(f"Slot {slot}:")
    for role in roles:
        for person in people:
            if pulp.value(assignments[slot, person, role]) == 1:
                print(f"  {role}: {person}")
```

The quantifiers have become `for...in` loops and the summations have become calls to `pulp.lpSum` with a generator expression iterating over the values of interest, but other than that it’s fairly straightforward.

With the parameters:

```python
slots = 5
people = ["Spongebob", "Squidward", "Mr. Crabs", "Pearl"]
roles = ["Fry Cook", "Cashier", "Money Fondler"]
leave = {"Mr. Crabs": [0, 2, 3, 4]}
max_assignments_per_person = 5
```

We get the output:

```
Slot 0:
  Fry Cook: Pearl
  Cashier: Squidward
  Money Fondler: Spongebob
Slot 1:
  Fry Cook: Spongebob
  Cashier: Mr. Crabs
  Money Fondler: Pearl
Slot 2:
  Fry Cook: Spongebob
  Cashier: Squidward
  Money Fondler: Pearl
Slot 3:
  Fry Cook: Spongebob
  Cashier: Pearl
  Money Fondler: Squidward
Slot 4:
  Fry Cook: Squidward
  Cashier: Spongebob
  Money Fondler: Pearl
```

If you play around with this you might notice two things:

- The rota you get is always the same.
- If there is no rota which meets the constraints, you get rubbish out!

This is due to how Cbc works. If you try GLPK, a different solver, you’ll still get a deterministic rota, but if there isn’t one meeting the constraints you’ll (probably) get back an empty rota. Solving ILP in the general case is NP-complete, so solvers use heuristics. Both Cbc and GLPK are deterministic, but they differ in heuristics.

You can check `problem.status` to see if it’s solved or not:

```python
if problem.status != pulp.constants.LpStatusOptimal:
    raise Exception("Unable to solve problem.")
```

Another way to make the solver go wrong is by having a wide range of values in your problem. I’m not sure why this can cause a problem, but it does.

A simple way to introduce randomisation is to give the solver a randomly generated objective to maximise. For example, we can assign a score to every possible allocation, and try to maximise the overall score:

```python
import random

randomise = pulp.lpSum(random.randint(0, 1) * assignments[slot, person, role]
                       for slot in range(slots)
                       for person in people
                       for role in roles)
```

As we want the actual objective function to take priority, scale it up:

```python
# Add objective
problem += pulp.lpSum(is_assigned[person] for person in people) * 100 + randomise
```

Now if we run the tool multiple times, we get different rotas:

```
$ python3 rota.py
Slot 0:
  Fry Cook: Spongebob
  Cashier: Squidward
  Money Fondler: Pearl
Slot 1:
  Fry Cook: Pearl
  Cashier: Spongebob
  Money Fondler: Mr. Crabs
Slot 2:
  Fry Cook: Spongebob
  Cashier: Squidward
  Money Fondler: Pearl
Slot 3:
  Fry Cook: Squidward
  Cashier: Spongebob
  Money Fondler: Pearl
Slot 4:
  Fry Cook: Squidward
  Cashier: Pearl
  Money Fondler: Spongebob

$ python3 rota.py
Slot 0:
  Fry Cook: Spongebob
  Cashier: Squidward
  Money Fondler: Pearl
Slot 1:
  Fry Cook: Spongebob
  Cashier: Squidward
  Money Fondler: Mr. Crabs
Slot 2:
  Fry Cook: Spongebob
  Cashier: Squidward
  Money Fondler: Pearl
Slot 3:
  Fry Cook: Pearl
  Cashier: Squidward
  Money Fondler: Spongebob
Slot 4:
  Fry Cook: Squidward
  Cashier: Spongebob
  Money Fondler: Pearl
```

The downside to this approach is that we might accidentally generate a random objective which is really hard to maximise, making the solver do a lot of work when all we really want is an arbitrary solution.

The GOV.UK support rota is a bit more complex than the example above. A typical rota runs for 12 weeks, with 1 week being 1 slot, in the above parlance. There are two types of roles, and constraints about who can occupy which roles:

- **In-hours support roles:**
  - *Primary in-hours*, must have been secondary in-hours at least three times.
  - *Secondary in-hours*, must have been shadow at least two times.
  - *Shadow*, must not have shadowed twice before. This role is optional.
- **Out-of-hours support roles:**
  - *Primary on-call*, no special requirements.
  - *Secondary on-call*, must have been primary on-call at least three times.

There’s an asymmetry there: the primary in-hours needs to be experienced, but the opposite is the case for on-call roles. This is intentional! If the primary on-call were more experienced, they would resolve every issue themselves and the less experienced one would never get to learn anything.

There are separate pools for each type: there are some people who can do in-hours support, some people who can do out-of-hours support, and some people who can do both.

To ensure individuals and teams aren’t over-burdened with support roles, there are some constraints about when people can be scheduled:

- Someone can’t be on in-hours support in two adjacent weeks.
- Two people on in-hours support in the same week (or adjacent weeks) can’t be on the same team.

And there is also a limit on the number of in-hours and out-of-hours roles someone can do across the entire rota.
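Constraints like these are easy to get subtly wrong, so it’s worth being able to check a generated rota directly. Here’s a sketch of a plain-Python validator for the adjacent-weeks rule (the helper name and parameters are mine, mirroring the shape of the `assignments` dict from the earlier example):

```python
def violates_adjacency(assignments, slots, people, in_hours_roles):
    # Return True if anyone works an in-hours support role in two
    # adjacent weeks. `assignments` maps (slot, person, role) to 0 or 1.
    for person in people:
        for slot in range(slots - 1):
            this_week = sum(assignments.get((slot, person, r), 0) for r in in_hours_roles)
            next_week = sum(assignments.get((slot + 1, person, r), 0) for r in in_hours_roles)
            if this_week and next_week:
                return True
    return False

people = ["alice"]
roles = ["primary", "secondary", "shadow"]
ok = {(0, "alice", "primary"): 1, (2, "alice", "primary"): 1}
bad = {(0, "alice", "primary"): 1, (1, "alice", "secondary"): 1}
print(violates_adjacency(ok, 3, people, roles))   # False: weeks 0 and 2 aren't adjacent
print(violates_adjacency(bad, 3, people, roles))  # True: weeks 0 and 1 are
```

The equivalent solver-side constraint bounds each person’s in-hours assignments over every pair of adjacent weeks by 1, in the same style as the earlier `pulp.lpSum` constraints.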

The objective function is a bit more complex too:

- As above, we want to maximise the number of people on the rota.
- We want to maximise the number of weeks when the secondary in-hours has done it fewer than three times.
- We want to maximise the number of weeks when the primary out-of-hours has done it fewer than three times.
- We want to maximise the number of weeks with a shadow.

I won’t go through all of the constraints, as they’re mostly more of the same, but this is an example of a particularly interesting constraint, as it’s pretty hard to implement.

The logic here is simple, but the language of ILP is very limited: you can’t directly express `if...then`-style constraints between variables. Now, this is fine if we want to limit the primary in-hours role to people who have been secondary in-hours at least three times *before this rota period*, as we can statically determine that:

\[ \forall t \in \mathcal T \text{, } \forall p \in \{ p \in \mathcal P \mid p \text{ has been a secondary fewer than three times} \} \text{, } A_{tp,\text{primary}} = 0 \]

But that’s too restrictive. If someone has been secondary in-hours two times before the start of the rota, and is secondary in-hours in one week, they should be able to be primary in-hours in subsequent weeks.

To work around this we’ll need some auxiliary variables.

Firstly, let’s record how many times someone has been a secondary at the start of each slot:

\[ \begin{split} S_{tp} = \begin{cases} \text{the number of times person }p\text{ has been a secondary before the start of this rota},&\text{ if }t = 0\\ S_{t-1,p} + A_{t-1,p,\text{secondary}},&\text{otherwise} \end{cases} \end{split} \]

Unlike previous variables we’ve seen, this is not a binary variable. But it is still an integral variable. Translating the above into ILP constraints is straightforward:

\[ \forall p \in \mathcal P \text{, } S_{0,p} = \text{the number of times person }p\text{ (etc)} \]

\[ \forall t \geqslant 1 \in \mathcal T \text{, } \forall p \in \mathcal P \text{, } S_{tp} = S_{t-1,p} + A_{t-1,p,\text{secondary}} \]

Now we can use a trick I found to encode conditionals in ILP. The trick is to introduce an auxiliary variable, \(D \in \{0,1\}\), and use constraints to ensure that \(D = 0\) when the condition goes one way, and \(D = 1\) when it goes the other.

Here is how we encode `if X > k then Y >= 0 else Y <= 0`, where `k` is constant:

\[ \begin{align} 0 &\lt X - k + m \times D \\ 0 &\leqslant Y + m \times D \\ X - k &\leqslant m \times (1 - D) \\ Y &\leqslant m \times (1 - D) \end{align} \]

Here \(X\) and \(Y\) are the ILP variables from our conditional, \(D\) is the auxiliary variable we introduced, and \(m\) is some large constant, way bigger than the possible maximum values of \(X\) or \(Y\). Let’s walk through this, firstly here’s the case where \(D = 0\):

\[ \begin{align} 0 &\lt X - k \\ 0 &\leqslant Y \\ X - k &\leqslant m \\ Y &\leqslant m \end{align} \]Because \(m\) is a large constant, the bottom two constraints are trivially true, so they can be removed. With a little rearranging, we have:

\[ \begin{align} k &\lt X \\ 0 &\leqslant Y \\ \end{align} \]So if \(D = 0\), \(X\) is strictly greater than \(k\) (the condition is true), and \(Y \geqslant 0\). That’s the true branch sorted!

Now let’s look at the \(D = 1\) branch:

\[ \begin{align} 0 &\lt X - k + m \\ 0 &\leqslant Y + m \\ X - k &\leqslant 0 \\ Y &\leqslant 0 \end{align} \]Because \(m\) is a large constant, this time we can get rid of the first two constraints. With a little rearranging, we get:

\[ \begin{align} X &\leqslant k \\ Y &\leqslant 0 \end{align} \]So if \(D = 1\), \(X\) is not strictly greater than \(k\) and \(Y\) is at most zero. Remember, the real “\(Y\)” we’re using is an \(A_{tpr}\) value, which is a binary value, so the overall effect is to specify that it must be zero. If \(Y\) weren’t binary, adding a constraint \(Y \geqslant 0\) alongside would do the same job.

Each conditional needs a fresh \(D\) variable. So adding these conditionals in results in a lot of extra variables and constraints:

\[ \begin{alignat*}{4} &\forall t \in \mathcal T \text{, } \forall p \in \mathcal P \text{, } & 0 &\lt S_{tp} - 2 + 999 \times D_{tp} \\ &\forall t \in \mathcal T \text{, } \forall p \in \mathcal P \text{, } &0 &\leqslant A_{tp,\text{primary}} + 999 \times D_{tp} \\ &\forall t \in \mathcal T \text{, } \forall p \in \mathcal P \text{, } &S_{tp} - 2 &\leqslant 999 \times (1 - D_{tp}) \\ &\forall t \in \mathcal T \text{, } \forall p \in \mathcal P \text{, } &A_{tp,\text{primary}} &\leqslant 999 \times (1 - D_{tp}) \end{alignat*} \]Here 2 has been substituted for \(k\), as someone needs to have been a secondary at least three times to be a primary; and 999 has been substituted for \(m\), which is larger than the number of secondary shifts someone could actually have done.
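If you want to convince yourself that the encoding really behaves like a conditional, a brute-force check in plain Python is straightforward. The ranges below are arbitrary test values, not anything from the rota generator:

```python
# Brute-force check of the big-M conditional encoding:
#   0 <  X - k + m*D
#   0 <= Y + m*D
#   X - k <= m*(1 - D)
#   Y     <= m*(1 - D)
# For every (X, Y) in a small range, some D in {0, 1} should satisfy
# all four constraints exactly when the intended conditional holds.

k, m = 2, 999  # k = "at least three times", m = big constant

def encoding_allows(X, Y):
    """Does some choice of the auxiliary D satisfy all four constraints?"""
    return any(
        0 < X - k + m * D
        and 0 <= Y + m * D
        and X - k <= m * (1 - D)
        and Y <= m * (1 - D)
        for D in (0, 1)
    )

def intended(X, Y):
    """The conditional we meant: if X > k then Y is free, else Y <= 0."""
    return Y >= 0 if X > k else Y <= 0

for X in range(0, 10):   # plausible secondary counts
    for Y in (0, 1):     # Y stands in for a binary assignment variable
        assert encoding_allows(X, Y) == intended(X, Y), (X, Y)
print("encoding matches the conditional on all test points")
```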

Let’s cover one more type of constraint: not over-burdening teams by taking all of their members away to be on support at once. This one is pretty simple, but does require a bit more information about the people, specifically, what team they’re on.

Given a function \(team : \mathcal P \to 2^{\mathcal P}\), which gives the set of people on the same team as a given person, then for every slot and every pair of distinct people on the same team, there should be no overlap in the in-hours assignments:

\[ \forall t \in \mathcal T \text{, } \forall p_1 \in \mathcal P \text{, } \forall p_2 \neq p_1 \in team(p_1) \text{, } \\ \forall r_1 \in \{\text{primary}, \text{secondary}, \text{shadow}\} \text{, } \\ \forall r_2 \in \{\text{primary}, \text{secondary}, \text{shadow}\} \text{, } \\ A_{t,p_1,r_1} + A_{t,p_2,r_2} \leqslant 1 \]My GOV.UK rota generator is on GitHub, and also on Heroku as The Incredible Rota Machine.

I’ve timed it on my laptop by running it repeatedly overnight, and found that the time to generate a rota varies between about 10s and 15m, but the median is about 30s. I expect it’ll be slower on Heroku, though.

It’s already paying off: I saved the person who usually puts together the rota an hour and a half! A new rota is needed every quarter, and it took me three and a half days to make, so it’ll pay for itself in a mere four and a half years!

It was a fun project, and a neat thing to do in firebreak—the one-week “do whatever you want as long as it’s useful” gap we have between quarters—but probably not worth it if you’re looking to save a bit of time.

The C standard bakes in enough details about pointers such that the amount of memory a C program can access (even on a hypothetical infinite-memory machine) is bounded and statically known. Access to an unbounded amount of memory is necessary (but not sufficient) for Turing completeness. Therefore C is not Turing complete.

This is an argument about the *specification* of C, not any particular *implementation*. The fact that no real machine has unbounded memory is totally irrelevant. This is not a criticism of C.

A friend told me that C isn’t actually Turing-complete due to the semantics of pointers, so I decided to dig through the (C11) spec to find evidence for this claim. The two key bits are 6.2.6.1.4 and 6.5.9.5:

Values stored in non-bit-field objects of any other object type consist of `n × CHAR_BIT` bits, where `n` is the size of an object of that type, in bytes. The value may be copied into an object of type `unsigned char [n]` (e.g., by `memcpy`); the resulting set of bytes is called the object representation of the value. Values stored in bit-fields consist of `m` bits, where `m` is the size specified for the bit-field. The object representation is the set of `m` bits the bit-field comprises in the addressable storage unit holding it. Two values (other than NaNs) with the same object representation compare equal, but values that compare equal may have different object representations.

The important bit is the use of the definite article in the first sentence, “where `n` is **the** size of an object of that type”; this means that all types have a size which is known statically.

Two pointers compare equal if and only if both are null pointers, both are pointers to the same object (including a pointer to an object and a subobject at its beginning) or function, both are pointers to one past the last element of the same array object, or one is a pointer to one past the end of one array object and the other is a pointer to the start of a different array object that happens to immediately follow the first array object in the address space.

Pointers to distinct objects of the same type compare unequal. (Interestingly, you could have a distinct heap for every type, with overlapping pointer values, and this is totally fine according to the spec! It doesn’t help you, however, because the number of types is finite: they’re specified in the text of the program, which is necessarily finite.)

As pointers are fixed in size, this means that there’s only a finite number of them. You can take a pointer to any object (“The unary `&` operator yields the address of its operand.”, first sentence of 6.5.3.2.3), therefore there are a finite number of objects that can exist at any one time!

However, C is slightly more interesting than a finite-state machine. We have one more mechanism to store values: the return value of a function! Fortunately, the C spec doesn’t impose a maximum stack depth (“Recursive function calls shall be permitted, both directly and indirectly through any chain of other functions.”, 6.5.2.2.11; nothing else is said on the matter), and so we can in principle implement a pushdown automaton.

Just an interesting bit of information about C, because it’s so common to see statements like “because C is Turing-complete…”. Of course, on a real computer, nothing is Turing-complete, but C doesn’t even manage it in theory.

In a discussion about this on Twitter, the possibility of doing some sort of virtual memory shenanigans to make a pointer see different things depending on its context of use came up. I believe that this is prohibited by the semantics of object lifetimes (6.2.4.2):

The lifetime of an object is the portion of program execution during which storage is guaranteed to be reserved for it. An object exists, has a constant address, and retains its last-stored value throughout its lifetime. If an object is referred to outside of its lifetime, the behavior is undefined. The value of a pointer becomes indeterminate when the object it points to (or just past) reaches the end of its lifetime.

The lifetime for heap-allocated objects is from the allocation until the deallocation (7.22.3.1):

The lifetime of an allocated object extends from the allocation until the deallocation. Each such allocation shall yield a pointer to an object disjoint from any other object.

I had a fun discussion on IRC, where someone argued that the definition of pointer equality does not mention the object representation, so the fixed object representation size is irrelevant: pointers could somehow carry extra information which is not part of the object representation.

It took a while to resolve, but I believe the final sentence of the object representation quote and the first clause of the pointer equality quote, together with the fact that pointers are values, resolves this:

1. Pointers are values.
2. “Two values (other than NaNs) with the same object representation compare equal, but values that compare equal may have different object representations.”
3. Points (1) and (2) mean that pointers with the same object representation compare equal.
4. “Two pointers compare equal if and only if…”
5. The “only if” in (4) means that if two pointers compare equal, then the rest of the rules apply.
6. Points (3) and (5) mean that two pointers with the same object representation compare equal, and therefore point to the same object (or are both null pointers, etc).

This means that there cannot be any further information than what is stored in the object representation.

Interestingly, I believe this forbids something I initially thought to be the case: I say in a footnote that different types could have different heaps. They *could*, but that doesn’t let you use the same object representation for pointers of different types!

Amazon Simple Notification Service (SNS) lets you set up “topics”, subscribe to them through a variety of protocols (including SMS and email), and send a message to a topic by hitting a web endpoint. This seemed the simplest way to get my computer to text me.

You’ll need an AWS account, and you’ll also need to be okay with SNS not being free. Fortunately, unless you’re going to be sending hundreds of notifications, it’s pretty cheap. Then you need to create an SNS topic and add subscribers to it, which you can do through the AWS web interface.

I set up the SNS topic and SMS notifications with Terraform, a tool for provisioning infrastructure. Here’s a self-contained Terraform config for an SNS topic with SMS notifications:

locals {
  phone      = "your phone number"
  access_key = "your aws access key"
  secret_key = "your aws secret key"
}

provider "aws" {
  access_key = "${local.access_key}"
  secret_key = "${local.secret_key}"
  region     = "eu-west-1"
}

resource "aws_sns_topic" "topic_name" {
  name = "topic-name"
}

resource "aws_sns_topic_subscription" "topic_name_sms" {
  topic_arn = "${aws_sns_topic.topic_name.arn}"
  protocol  = "sms"
  endpoint  = "${local.phone}"
}

In my actual Terraform configuration, the phone number and keys are in a file which isn’t checked into the repository. Unfortunately Terraform can’t set up email subscriptions, as they need to be manually confirmed. So I had to set that up via the AWS web interface.

You can send test messages through the AWS web interface, so try that to make sure everything is working.

SNS exposes a web endpoint, so the “simplest” way to send a message to your topic would be to `curl`

that. I decided to use the excellent boto3 library for Python instead.

I quickly whipped together this script, which I store in `~/bin/aws-sns`

:

#! /usr/bin/env nix-shell
#! nix-shell -i python3 -p python3Packages.boto3

'''
A script to push a message from stdin to a SNS topic.
'''

import argparse
import boto3
import sys

arg_parser = argparse.ArgumentParser(description=__doc__)
arg_parser.add_argument(
    '-t', dest='topic', required=True, help='Topic ARN.')
arg_parser.add_argument(
    '-s', dest='subject', required=True, help='Subject for email.')
arg_parser.add_argument(
    '-R', dest='region', required=False, help='Region to use.',
    default='eu-west-1')
parsed_args = arg_parser.parse_args()

# boto3 checks for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY env
# vars automatically.
client = boto3.client('sns', region_name=parsed_args.region)

# message body is stdin
message = sys.stdin.read()

response = client.publish(
    TopicArn=parsed_args.topic,
    Subject=parsed_args.subject,
    Message=message
)

print('Message ID: %s' % response['MessageId'])

This is a nix-shell script, which fetches boto3 automatically when invoked. If you’re not a nix user, you’d do the usual virtualenv/source/pip dance.

You’ll need to create a user in the AWS web interface with permissions to poke SNS, and note down their access key and secret key. With those keys, and the ARN of your SNS topic, you should be able to send a message from the command line:

$ export AWS_ACCESS_KEY_ID="foo"
$ export AWS_SECRET_ACCESS_KEY="bar"
$ echo "Hello, world" | aws-sns -t "baz" -s "Test Message"

Now we have the alerting, so we just need the monitoring. Firstly we need a script to check whatever condition we care about (zpool status in my case), and to call the SNS script if it’s not good.

Here’s a self-contained zpool script:

#!/usr/bin/env bash

export AWS_ACCESS_KEY_ID="foo"
export AWS_SECRET_ACCESS_KEY="bar"

ZFS_TOPIC_ARN="baz"

if [[ "$(zpool status -x)" != "all pools are healthy" ]]; then
  zpool status | aws-sns -t "$ZFS_TOPIC_ARN" -s "zfs zpool status"
fi

The final piece of the puzzle is a systemd timer (or cronjob, whatever your system uses) to periodically run the script. I have mine run every 12 hours. Here’s the service definition from my NixOS config:

systemd.timers.monitoring-scripts = {
  wantedBy = [ "timers.target" ];
  timerConfig = {
    OnCalendar = "0/12:00:00";
  };
};

systemd.services.monitoring-scripts = {
  description = "Run monitoring scripts";
  serviceConfig.WorkingDirectory = "/home/barrucadu/monitoring-scripts";
  serviceConfig.ExecStart = "${pkgs.zsh}/bin/zsh --login -c ./monitor.sh";
  serviceConfig.User = "barrucadu";
  serviceConfig.Group = "users";
};

Which generates this systemd timer:

[Unit]

[Timer]
OnCalendar=0/12:00:00

And this unit (ignore the scary nix paths):

[Unit]
Description=Run monitoring scripts

[Service]
Environment="LOCALE_ARCHIVE=/nix/store/vg0s4sl74f5ik64wrrx0q9n6m48vvmgs-glibc-locales-2.26-131/lib/locale/locale-archive"
Environment="PATH=/nix/store/cb3slv3szhp46xkrczqw7mscy5mnk64l-coreutils-8.29/bin:/nix/store/364b5gkvgrm87bh1scxm5h8shp975n0r-findutils-4.6.0/bin:/nix/store/s63b2myh6rxfl4aqwi9yxd6rq66djk33-gnugrep-3.1/bin:/nix/store/navldm477k3ar6cy0zlw9rk43i459g69-gnused-4.4/bin:/nix/store/f9dbl8y4zjgr81hs3y3zf187rqv83apz-systemd-237/bin:/nix/store/cb3slv3szhp46xkrczqw7mscy5mnk64l-coreutils-8.29/sbin:/nix/store/364b5gkvgrm87bh1scxm5h8shp975n0r-findutils-4.6.0/sbin:/nix/store/s63b2myh6rxfl4aqwi9yxd6rq66djk33-gnugrep-3.1/sbin:/nix/store/navldm477k3ar6cy0zlw9rk43i459g69-gnused-4.4/sbin:/nix/store/f9dbl8y4zjgr81hs3y3zf187rqv83apz-systemd-237/sbin"
Environment="TZDIR=/nix/store/brib029xs79az5vhjd5nhixp1l39ni31-tzdata-2017c/share/zoneinfo"
ExecStart=/nix/store/77bsskn86yf6h11mx96xkxm9bqv42kqg-zsh-5.5.1/bin/zsh --login -c ./monitor.sh
Group=users
User=barrucadu
WorkingDirectory=/home/barrucadu/monitoring-scripts

The only thing left to do was to test the whole set-up by simulating a hardware failure.

I powered off nyarlathotep, unplugged a drive, and booted it back up again. I then ran the monitoring script directly, to ensure that it worked, and then waited until midnight (which was closer than noon, at the time I was doing this) to check that the timer worked.

Both SMSes and emails came through:

This got me thinking about *market values*. If I want to see the current market value of all my assets, I need to convert them all to the same currency, using a recent exchange rate. So I now have a script to fetch, once a day, exchange rates between £ and everything else:

P 2018-05-30 BTC £5501.58
P 2018-05-30 ETH £413.01
P 2018-05-30 LTC £87.85
P 2018-05-30 EUR £0.8775
P 2018-05-30 JPY £0.0069
P 2018-05-30 USD £0.7531
P 2018-05-30 VANEA £210.24

My script exports market values to influxdb, so I can see how the market value of my assets (in £) has changed over time. Great!

But what if I want to see the market value in a currency other than £? Like USD, for instance? The problem is that I have all these exchange rates:

But I don’t have, say, the exchange rate from EUR to USD.

Well it turns out that the reflexive-symmetric-transitive closure of that graph is just the thing I want! It looks pretty nasty with 7 currencies, so here it is with just 3:

Let’s see how to calculate those `?`s.

I could pull in a functional graph library, but the graphs I’m dealing with are so small that I may as well just implement the few operations I need myself.

A graph is essentially a function `node -> node -> Maybe label`:

import Data.Map (Map)
import qualified Data.Map as M

type Graph node label = Map node (Map node label)

We need an empty graph and, given a graph, we need to be able to add nodes and edges. As our nodes are the keys in the map, they need to be orderable.

-- | A graph with no nodes or edges.
empty :: Ord n => Graph n l
empty = M.empty

-- | Add a node to a graph.
addNode :: Ord n => n -> Graph n l -> Graph n l
addNode n = M.insertWith (\_ old -> old) n M.empty

We don’t allow duplicate edges, as that means we have two exchange rates between the same pair of currencies, which doesn’t make much sense. So adding edges is a little more involved, as the edge might already exist:

-- | Add an edge to a graph, combining edges if they exist.
--
-- If the source node doesn't exist, does not change the graph.
addEdge
  :: Ord n
  => (l -> l -> l)  -- ^ Function to combine edge labels.
  -> n              -- ^ Source node.
  -> n              -- ^ Target node.
  -> l              -- ^ New label.
  -> Graph n l
  -> Graph n l
addEdge combine from to label graph = case M.lookup from graph of
  Just edges ->
    let edges' = M.insertWith combine to label edges
    in M.insert from edges' graph
  Nothing -> graph

Ok, so we can represent our currency graph. Now we need to compute the reflexive-symmetric-transitive closure.

Reflexivity lets us go from a currency to itself:

-- | Take the reflexive closure by adding edges with the given label
-- where missing.
reflexiveClosure :: Ord n => l -> Graph n l -> Graph n l
reflexiveClosure label graph = foldr (.) id
  [ addEdge (\_ old -> old) nA nA label
  | nA <- M.keys graph
  ] graph

If we know a exchange rate from A to B, symmetry gives us an exchange rate from B to A:

-- | Take the symmetric closure by adding new edges, transforming
-- existing labels.
symmetricClosure :: Ord n => (l -> l) -> Graph n l -> Graph n l
symmetricClosure mk graph = foldr (.) id
  [ addEdge (\_ old -> old) nB nA (mk lAB)
  | (nA, edges) <- M.assocs graph
  , (nB, lAB) <- M.assocs edges
  ] graph

If we know an exchange rate from A to B, and from B to C, transitivity gives us an exchange rate from A to C:

-- | Take the transitive closure by adding new edges, combining
-- existing labels.
transitiveClosure :: (Ord n, Eq l) => (l -> l -> l) -> Graph n l -> Graph n l
transitiveClosure combine = fixEq step
  where
    fixEq f = find . iterate f
      where
        find (a1:a2:as)
          | a1 == a2  = a1
          | otherwise = find (a2:as)

    step graph = foldr (.) id
      [ addEdge (\_ old -> old) nA nC (combine lAB lBC)
      | (nA, edges) <- M.assocs graph
      , (nB, lAB)   <- M.assocs edges
      , (nC, lBC)   <- M.assocs (M.findWithDefault M.empty nB graph)
      ] graph

Exchange rates have three properties which we can make use of:

- Any currency has an exchange rate with itself of 1.
- If we have an exchange rate of `x` from A to B, then the rate from B to A is `1/x`.
- If we have an exchange rate of `x` from A to B, and an exchange rate of `y` from B to C, then the rate from A to C is `x*y`.

So, given our graph of exchange rates, we can fill in the blanks like so:

-- | Fill in the blanks in an exchange rate graph.
completeRates :: (Ord n, Eq l, Fractional l) => Graph n l -> Graph n l
completeRates = transitiveClosure (*) . symmetricClosure (1/) . reflexiveClosure 1
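As a cross-check, the same three closures can be computed imperatively in a few lines of Python. The three currencies and both rates below are invented for illustration:

```python
from itertools import product

# Toy exchange-rate graph as a dict-of-dicts; the rates are made up.
rates = {"GBP": {"EUR": 1.14, "USD": 1.33}}
currencies = {"GBP", "EUR", "USD"}

# Reflexive closure: every currency exchanges with itself at rate 1.
for c in currencies:
    rates.setdefault(c, {}).setdefault(c, 1.0)

# Symmetric closure: a known rate x from A to B gives 1/x from B to A.
for a in list(rates):
    for b, x in list(rates[a].items()):
        rates.setdefault(b, {}).setdefault(a, 1 / x)

# Transitive closure: keep combining A->B and B->C into A->C until a
# fixed point is reached (the fixEq trick, done imperatively).
changed = True
while changed:
    changed = False
    for a, b, c in product(currencies, repeat=3):
        if b in rates[a] and c in rates[b] and c not in rates[a]:
            rates[a][c] = rates[a][b] * rates[b][c]
            changed = True

print(round(rates["EUR"]["USD"], 4))  # (1/1.14) * 1.33, i.e. 1.1667
```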

There’s also a fourth property we can assume in reality:

- Any two paths between the same two currencies work out to the same exchange rate.

Otherwise we could make a profit by going around in a circle, and I’m sure someone would have noticed that already and made a lot of money. In our implementation however, we can’t assume that. Exchange rates available online have limited precision, and rounding errors will introduce more problems. But in general things will be close, so it doesn’t matter too much from the perspective of getting a rough idea of our personal finances.

So now I can look at my total assets in yen and feel like a millionaire:
