This is binary search, implemented in Python:

```python
def binary_search(haystack, needle):
    lo = 0
    hi = len(haystack) - 1
    found = None
    iterations = 0
    while lo <= hi and found is None:
        iterations += 1
        mid = lo + (hi - lo) // 2
        if haystack[mid] == needle:
            found = mid
        elif haystack[mid] > needle:
            hi = mid - 1
        else:
            lo = mid + 1
    return (found, iterations)
```

It returns both the index of the found element and the number of iterations, for reasons which will become apparent in section 3.

How do we know it’s right? Well, let’s test it. I decided to do this with Hypothesis, a property-based testing tool. Here’s a property that an element in the list is found by `binary_search`:

```python
from hypothesis import given
from hypothesis.strategies import lists, integers

@given(
    haystack=lists(integers(), min_size=1),
    index=integers()
)
def test_needle_in_haystack(haystack, index):
    haystack.sort()
    needle = haystack[index % len(haystack)]
    found_index, _ = binary_search(haystack, needle)
    assert found_index >= 0
    assert found_index < len(haystack)
    assert haystack[found_index] == needle
```

Given a sorted nonempty list of integers, and an index into that list, the element at that position should be found by `binary_search`.

We should also test the other case: elements *not* in the list shouldn’t have an index returned:

```python
@given(
    haystack=lists(integers()),
    needle=integers()
)
def test_needle_might_be_in_haystack(haystack, needle):
    haystack.sort()
    found_index, _ = binary_search(haystack, needle)
    if needle in haystack:
        assert found_index >= 0
        assert found_index < len(haystack)
        assert haystack[found_index] == needle
    else:
        assert found_index is None
```

Binary search is pretty good, but I found myself wondering one day while doing Advent of Code if we could do better by not splitting the search space in the middle, but biasing our split by assuming the data is distributed linearly. After all, if you look in a dictionary for “binary” you don’t start by opening it to “M”.

This is interpolation search; it’s like binary search, but different:

```python
def interpolation_search(haystack, needle):
    lo = 0
    hi = len(haystack) - 1
    found = None
    iterations = 0
    while lo <= hi and found is None:
        iterations += 1
        if needle < haystack[lo] or needle > haystack[hi]:
            # a new special case
            break
        elif haystack[lo] == haystack[hi]:
            # a new special case
            if needle == haystack[lo]:
                found = lo
            else:
                break
        else:
            # a more complex calculation
            mid = lo + int(((hi - lo) / (haystack[hi] - haystack[lo])) * (needle - haystack[lo]))
            if haystack[mid] == needle:
                found = mid
            elif haystack[mid] > needle:
                hi = mid - 1
            else:
                lo = mid + 1
    return (found, iterations)
```

It’s a bit more complex: we’ve got two new special cases, one for if the needle is not in the haystack at all, and one for if all the elements in the haystack are equal. We’ve also got a more complex `mid` calculation, trying to figure out where in the haystack the needle will appear.
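To see what that `mid` expression does on data that really is linearly distributed, here’s a small standalone check (the evenly spaced haystack is my own example, not from the post): it lands exactly on the needle in one step.

```python
# The interpolated midpoint from the search above, on evenly spaced data.
haystack = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
needle = 70

lo, hi = 0, len(haystack) - 1
mid = lo + int(((hi - lo) / (haystack[hi] - haystack[lo])) * (needle - haystack[lo]))

print(mid, haystack[mid])  # index 7 holds the needle: found on the first probe
```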

We can use Hypothesis to compare our two search functions against each other:

```python
@given(
    haystack=lists(integers()),
    needle=integers()
)
def test_interpolation_equiv_binary(haystack, needle):
    haystack.sort()
    found_index_b, _ = binary_search(haystack, needle)
    found_index_i, _ = interpolation_search(haystack, needle)
    if found_index_b is None:
        assert found_index_i is None
    else:
        assert found_index_i is not None
        assert haystack[found_index_b] == haystack[found_index_i]
```

This is a common trick with property-based testing (and lots of types of testing, really): implement a simpler version of your thing and test that the more complex “real” implementation behaves the same as the simpler “test” implementation.

I intentionally didn’t do this:

```python
@given(
    haystack=lists(integers()),
    needle=integers()
)
def test_interpolation_equal_binary(haystack, needle):
    haystack.sort()
    found_index_b, _ = binary_search(haystack, needle)
    found_index_i, _ = interpolation_search(haystack, needle)
    assert found_index_b == found_index_i
```

Because the functions can differ if the needle is present in the haystack multiple times (eg, looking for `0` in `[0,0,1]`), and that’s fine.
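To see the divergence concretely, here’s a standalone run of both functions (copied from the definitions above) on that exact example: binary search probes the middle and returns index 1, while interpolation’s biased midpoint lands on index 0. Both answers are correct, just different.

```python
def binary_search(haystack, needle):
    # as defined earlier in the post
    lo, hi = 0, len(haystack) - 1
    found, iterations = None, 0
    while lo <= hi and found is None:
        iterations += 1
        mid = lo + (hi - lo) // 2
        if haystack[mid] == needle:
            found = mid
        elif haystack[mid] > needle:
            hi = mid - 1
        else:
            lo = mid + 1
    return (found, iterations)

def interpolation_search(haystack, needle):
    # as defined earlier in the post
    lo, hi = 0, len(haystack) - 1
    found, iterations = None, 0
    while lo <= hi and found is None:
        iterations += 1
        if needle < haystack[lo] or needle > haystack[hi]:
            break
        elif haystack[lo] == haystack[hi]:
            if needle == haystack[lo]:
                found = lo
            else:
                break
        else:
            mid = lo + int(((hi - lo) / (haystack[hi] - haystack[lo])) * (needle - haystack[lo]))
            if haystack[mid] == needle:
                found = mid
            elif haystack[mid] > needle:
                hi = mid - 1
            else:
                lo = mid + 1
    return (found, iterations)

print(binary_search([0, 0, 1], 0))         # (1, 1): the middle element matches
print(interpolation_search([0, 0, 1], 0))  # (0, 1): the biased midpoint matches
```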

Given our fancy midpoint calculation, the interpolation search *must* be better than (ie, do no more iterations than) binary search, right?

```python
@given(
    haystack=lists(integers(), min_size=1),
    index=integers()
)
def test_interpolation_beats_binary(haystack, index):
    haystack.sort()
    needle = haystack[index % len(haystack)]
    _, iterations_b = binary_search(haystack, needle)
    _, iterations_i = interpolation_search(haystack, needle)
    assert iterations_i <= iterations_b
```

Wrong.

```
==================================== FAILURES ===================================
________________________ test_interpolation_beats_binary ________________________

    @given(
>       haystack=lists(integers(), min_size=1),
        index=integers()
    )
    def test_interpolation_beats_binary(haystack, index):

interpolation-search.py:101:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

haystack = [0, 1, 3], index = 64

    @given(
        haystack=lists(integers(), min_size=1),
        index=integers()
    )
    def test_interpolation_beats_binary(haystack, index):
        haystack.sort()
        needle = haystack[index % len(haystack)]
        _, iterations_b = binary_search(haystack, needle)
        _, iterations_i = interpolation_search(haystack, needle)
>       assert iterations_i <= iterations_b
E       assert 2 <= 1

interpolation-search.py:111: AssertionError
---------------------------------- Hypothesis -----------------------------------
Falsifying example: test_interpolation_beats_binary(haystack=[0, 1, 3], index=64)
====================== 1 failed, 3 passed in 0.34 seconds =======================
```

We have a counterexample where binary search wins: with the list `[0, 1, 3]` and the index 64 (which gives a `needle` of 1), binary search finds it in 1 iteration but interpolation search takes 2.

Let’s step through that example:

| iteration | binary search ||||interpolation search||||
|---|---|---|---|---|---|---|---|---|
| | lo | hi | mid | found | lo | hi | mid | found |
| 0 | 0 | 2 | | False | 0 | 2 | | False |
| 1 | 0 | 2 | 1 | True | 0 | 2 | 0 | False |
| 2 | | | | | 1 | 2 | 1 | True |

In iteration 1, the binary search picks the middle element, which is the right answer. But the interpolation search doesn’t. It’s thrown off by the assumption we’ve made in the `mid` calculation: that the values will be linearly distributed. If they’re not, the biasing of the interpolation search towards one end of the search space will work against us.
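Plugging the falsifying example into the `mid` formula (copied from `interpolation_search` above) shows the miss: with `[0, 1, 3]` the values are skewed towards the low end, so interpolation’s first probe undershoots.

```python
haystack = [0, 1, 3]
needle = 1
lo, hi = 0, 2

# Interpolation's first probe: it assumes values rise linearly from 0 to 3,
# so it guesses the needle is about a third of the way along.
mid = lo + int(((hi - lo) / (haystack[hi] - haystack[lo])) * (needle - haystack[lo]))
print(mid)  # 0 -- it probes index 0, not the correct index 1
```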

Sadly, my idle thought about a biased search hasn’t revolutionised computer science. Better luck next time.

- awsfiles has my AWS infrastructure
- dotfiles has my user-level configuration
- nixfiles has my system-level configuration

So really I just need to back up my data and those git repositories.

I store my backups in S3, and move them to the lower-cost (but harder-to-access) Glacier storage after 64 days. I use terraform to provision all my AWS stuff, including this backup location:

```terraform
resource "aws_s3_bucket" "backup" {
  bucket = "barrucadu-backups"
  acl    = "private"

  versioning {
    enabled = true
  }

  lifecycle_rule {
    id      = "archive"
    enabled = true

    transition {
      days          = 32
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 64
      storage_class = "GLACIER"
    }
  }
}
```

There’s also an IAM policy granting access to the bucket:

```terraform
resource "aws_iam_policy" "tool_duplicity" {
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload",
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "${aws_s3_bucket.backup.arn}",
        "${aws_s3_bucket.backup.arn}/*"
      ]
    }
  ]
}
EOF
}
```

This is the minimal set of permissions to run duplicity, I think. The bucket itself is versioned, but I don’t grant the backup user any versioning-related permissions (eg, they can’t delete an old version of a file). This is so that if the credentials for the backup user get leaked somehow, and someone deletes or overwrites my backups, I can recover them. The backups are encrypted, so someone downloading them is only a small concern.

Because I don’t take full filesystem backups I have two parts to my backup scripts. The main script:

- Checks for a host-specific backup script (not all hosts take backups)
- Creates a temporary directory for the backup to be generated in
- Runs the host-specific script
- Uses duplicity to generate a full or incremental backup, targeting the S3 bucket

It looks like this:

```shell
#!/bin/sh

set -e

# location of scripts
BACKUP_SCRIPT_DIR=$HOME/backup-scripts

# hostname
MY_HOST=`hostname`

# aws config
AWS_S3_BUCKET="barrucadu-backups"

BACKUP_TYPE=$1

if [[ -z "$BACKUP_TYPE" ]]; then
  echo 'specify a backup type!'
  exit 1
fi

if [[ -x "${BACKUP_SCRIPT_DIR}/host-scripts/${MY_HOST}" ]]; then
  DIR=`mktemp -d`
  trap "rm -rf $DIR" EXIT
  cd $DIR

  # generates a backup in ./$MY_HOST
  time $BACKUP_SCRIPT_DIR/host-scripts/$MY_HOST

  time $BACKUP_SCRIPT_DIR/duplicity.sh \
    $BACKUP_TYPE \
    $MY_HOST \
    "s3+http://${AWS_S3_BUCKET}/${MY_HOST}"
else
  echo 'nothing to do!'
fi
```

The `duplicity.sh` script sets some environment variables and common parameters:

```shell
#!/bin/sh

set -e

# location of scripts
BACKUP_SCRIPT_DIR=$HOME/backup-scripts

# aws config
AWS_PROFILE="backup"

if [[ ! -e $BACKUP_SCRIPT_DIR/passphrase.sh ]]; then
  echo 'missing passphrase file!'
  exit 1
fi

source $BACKUP_SCRIPT_DIR/passphrase.sh

export AWS_PROFILE=$AWS_PROFILE
export PASSPHRASE=$PASSPHRASE

nix run nixpkgs.duplicity -c \
  duplicity \
  --s3-european-buckets \
  --s3-use-multiprocessing \
  --s3-use-new-style \
  --verbosity notice \
  "$@"
```

Duplicity’s incremental backups are based on hashing chunks of files, so it can take incremental backups even though all the file modification times will have changed (because the backup is generated anew every time) since the last full backup.
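As a rough illustration of that idea (this is *not* duplicity’s actual implementation, which uses the rsync algorithm with rolling checksums; the chunk size and helper names here are my own): hash fixed-size chunks of the file, and a chunk only needs re-uploading when its hash changes, regardless of file modification times.

```python
import hashlib

CHUNK_SIZE = 4  # tiny for demonstration; real tools use much larger chunks

def chunk_hashes(data: bytes) -> list:
    """Hash each fixed-size chunk of the data."""
    return [
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    ]

def changed_chunks(old: bytes, new: bytes) -> list:
    """Indices of chunks that would need re-uploading."""
    old_hashes = chunk_hashes(old)
    new_hashes = chunk_hashes(new)
    return [
        i for i, h in enumerate(new_hashes)
        if i >= len(old_hashes) or old_hashes[i] != h
    ]

# Only the chunk containing the edit differs, even though the whole
# backup directory was regenerated from scratch.
print(changed_chunks(b"aaaabbbbcccc", b"aaaaXbbbcccc"))  # [1]
```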

The backups are encrypted with a 512-character password (the `PASSPHRASE` environment variable in `duplicity.sh`). The same password is used for all the backups, and each machine which takes backups has a copy of the password. The backups are useless if I lose the password, but for that to happen, I’d have to lose:

- Both of my home computers, in London
- A VPS, on a physical server in Nuremberg
- A dedicated server, in France somewhere

That seems pretty unlikely. Even if it does happen, any event (or sequence of events) which takes out those three locations in quick succession would probably give me big enough problems that not having a backup of my git repositories is a small concern—it could also take out my backups themselves, which are in Ireland.

These aren’t terribly interesting, or useful to anyone other than me, so I’ll just give an example rather than go through each one.

The script for dunwich, my VPS, backs up:

- All my public github repositories (I don’t have any private ones)
- All my self-hosted repositories (which are all private)
- My syncthing directory

It looks like this:

```shell
#! /usr/bin/env nix-shell
#! nix-shell -i bash -p jq

# I have no private github repos, and under 100 public ones; so this
# use of the public API is fine.
function clone_public_github_repos() {
  curl 'https://api.github.com/users/barrucadu/repos?per_page=100' 2>/dev/null | \
    jq -r '.[].clone_url' | \
    while read url; do
      git clone --bare "$url"
    done
}

function clone_all_dunwich_repos() {
  for dir in /srv/git/repositories/*.git; do
    url="git@dunwich.barrucadu.co.uk:$(basename $dir)"
    git clone --bare "$url"
  done
}

set -e

[[ -d dunwich ]] && rm -rf dunwich
mkdir dunwich
cd dunwich

cp -a $HOME/s syncthing

mkdir git
mkdir git/dunwich
mkdir git/github.com

pushd git/dunwich
clone_all_dunwich_repos
popd

pushd git/github.com
clone_public_github_repos
popd
```

The script creates the backup inside a `dunwich` directory: all the host-specific scripts generate their backup in a folder named after the host. This was useful in an earlier incarnation of my backup scripts, but isn’t really necessary now.

I run a full backup monthly, at midnight on the 1st. I run an incremental backup at 4am every Monday. The difference in times is to avoid overlap if the first of the month is a Monday (and I didn’t want to faff around with lock files).

The backups are taken by two systemd services which are defined in my NixOS configuration:

```nix
#############################################################################
## Backups
#############################################################################

systemd.timers.backup-scripts-full = {
  wantedBy = [ "timers.target" ];
  timerConfig = {
    OnCalendar = config.services.backup-scripts.OnCalendarFull;
  };
};

systemd.timers.backup-scripts-incr = {
  wantedBy = [ "timers.target" ];
  timerConfig = {
    OnCalendar = config.services.backup-scripts.OnCalendarIncr;
  };
};

systemd.services.backup-scripts-full = {
  description = "Take a full backup";
  serviceConfig.WorkingDirectory = config.services.backup-scripts.WorkingDirectory;
  serviceConfig.ExecStart = "${pkgs.zsh}/bin/zsh --login -c './backup.sh full'";
  serviceConfig.User = config.services.backup-scripts.User;
  serviceConfig.Group = config.services.backup-scripts.Group;
};

systemd.services.backup-scripts-incr = {
  description = "Take an incremental backup";
  serviceConfig.WorkingDirectory = config.services.backup-scripts.WorkingDirectory;
  serviceConfig.ExecStart = "${pkgs.zsh}/bin/zsh --login -c './backup.sh incr'";
  serviceConfig.User = config.services.backup-scripts.User;
  serviceConfig.Group = config.services.backup-scripts.Group;
};
```

The working directory, user, group, and frequencies are all configurable—but so far no host overrides them. I thought about having a separate backup user, but decided that it didn’t gain any security but cost some convenience (as everything I want to back up is owned by my user anyway).

I’m pleased to announce a new super-major release of dejafu, a library for testing concurrent Haskell programs.

While there are breaking changes, common use-cases shouldn’t be affected too significantly (or not at all). There is a brief guide to the changes, and how to migrate if necessary, on the website.

dejafu is a unit-testing library for concurrent Haskell programs. Tests are deterministic, and work by systematically exploring the possible schedules of your concurrency-using test case, allowing you to confidently check your threaded code.

HUnit and Tasty bindings are available.

dejafu requires your test case to be written against the `MonadConc` typeclass from the concurrency package. This is a necessity: dejafu cannot peek inside your `IO` or `STM` actions, so it needs to be able to plug in an alternative implementation of the concurrency primitives for testing. There is some guidance for how to switch from `IO` code to `MonadConc` code on the website.

If you really need `IO`, you can use `MonadIO`, but make sure it’s deterministic enough to not invalidate your tests!

Here’s a small example reproducing a deadlock found in an earlier version of the auto-update library:

```
> :{
autocheck $ do
  auto <- mkAutoUpdate defaultUpdateSettings
  auto
:}
[fail] Successful
    [deadlock] S0--------S1-----------S0-
[fail] Deterministic
    [deadlock] S0--------S1-----------S0-
    () S0--------S1--------p0--
```

dejafu finds the deadlock, and gives a simplified execution trace for each distinct result. More in-depth traces showing exactly what each thread did are also available. This is using a version of auto-update modified to use the `MonadConc` typeclass. The source is in the dejafu testsuite.

The highlights for this release are setup actions, teardown actions, and invariants:

- **Setup actions** are for things which are not really a part of your test case, but which are needed for it (for example, setting up a test distributed system). As dejafu can run a single test case many times, repeating this work can be a significant overhead. By defining this as a setup action, dejafu can “snapshot” the state at the end of the action, and efficiently reload it in subsequent executions of the same test.
- **Teardown actions** are for things you want to run after your test case completes, in all cases, even if the test deadlocks (for example). As dejafu controls the concurrent execution of the test case, inspecting shared state is possible even if the test case fails to complete.
- **Invariants** are effect-free atomically-checked conditions over shared state which must always hold. If an invariant throws an exception, the test case is aborted, and any teardown action run.

Here is an example of a setup action with an invariant:

```
> :{
autocheck $
  let setup = do
        var <- newEmptyMVar
        registerInvariant $ do
          value <- inspectMVar var
          when (value == Just 1) $ throwM Overflow
        pure var
  in withSetup setup $ \var -> do
       fork $ putMVar var 0
       fork $ putMVar var 1
       tryReadMVar var
:}
[fail] Successful
    [invariant failure] S0--P2-
[fail] Deterministic
    [invariant failure] S0--P2-
    Nothing S0----
    Just 0 S0--P1--S0--
```

In the `[invariant failure]` case, thread 2 is scheduled, writing the forbidden value “1” to the MVar, which terminates the test.

Here is an example of a setup action with a teardown action:

```
> :{
autocheck $
  let setup = newMVar ()
      teardown var (Right _) = show <$> tryReadMVar var
      teardown _ (Left e) = pure (show e)
  in withSetupAndTeardown setup teardown $ \var -> do
       fork $ takeMVar var
       takeMVar var
:}
[pass] Successful
[fail] Deterministic
    "Nothing" S0---
    "Deadlock" S0-P1--S0-
```

The teardown action can perform arbitrary concurrency effects, including inspecting any mutable state returned by the setup action.

Setup and teardown actions were previously available in a slightly different form as the `dontCheck` and `subconcurrency` functions, which have been removed (see the migration guide if you used these).

Haskell typeclass instances have two parts: some *constraints*, and the *instance head*:

```haskell
newtype WrappedFunctor f a = WrappedFunctor (f a)

instance Functor f => Functor (WrappedFunctor f) where
--       ^^^^^^^^^ constraints
--                    ^^^^^^^^^^^^^^^^^^^^^^^^^^ head
  fmap f (WrappedFunctor fa) = WrappedFunctor (fmap f fa)
```

More specifically, the head is of the form `C (T a1 ... an)`, where `C` is the class, `T` is a type constructor, and `a1 ... an` are distinct type variables. The `FlexibleInstances` extension relaxes this restriction a little, allowing some (or all) of the `a1 ... an` to be arbitrary types, as well as type variables.

When the type checker needs to find an instance, it does so purely based on the head; constraints don’t come into it. The instance above means “whenever you use `WrappedFunctor f` as a functor, *regardless of what f is and even if we don’t know what it is yet*, then you can use this instance”, and a type error will be thrown if whatever concrete type `f` is instantiated to doesn’t in fact have a functor instance.

You might think that, if we didn’t define the instance above and instead defined this one:

```haskell
instance Functor (WrappedFunctor Maybe) where
  fmap f (WrappedFunctor fa) = WrappedFunctor (fmap f fa)
```

…and then used a `WrappedFunctor f` as a functor, that the type checker would infer `f` must be `Maybe`. This is not so! Typeclass inference happens under an “open world” approach: just because only one instance is known *at this time* doesn’t mean there won’t be a second instance discovered later. Prematurely selecting the instance for `WrappedFunctor Maybe` could be unsound.

In GHC Haskell, we can express a constraint that two types have to be equal. For example, this is a weird way to check that two values are equal:

```haskell
-- this requires GADTs or TypeFamilies
funnyEq :: (Eq a, a ~ b) => a -> b -> Bool
funnyEq = (==)
```

We only have a constraint `Eq a`, not `Eq b`, but because of the `a ~ b` constraint, the type checker knows that they’re the same type:

```
> funnyEq 'a' 'b'
False
> funnyEq True True
True
> funnyEq True 'b'
<interactive>:22:1: error:
    • Couldn't match type ‘Bool’ with ‘Char’
        arising from a use of ‘funnyEq’
    • In the expression: funnyEq True 'b'
      In an equation for ‘it’: it = funnyEq True 'b'
```

Let’s put the two together: throw away the two instances we defined above, and look at this one:

```haskell
instance (f ~ Maybe) => Functor (WrappedFunctor f) where
  fmap f (WrappedFunctor fa) = WrappedFunctor (fmap f fa)
```

This instance means “whenever you use `WrappedFunctor f` as a functor, *regardless of what f is and even if we don’t know what it is yet*, then you can use this instance”, and a type error will be thrown if `f` cannot be instantiated to `Maybe`. This is different to the instance `Functor (WrappedFunctor Maybe)`!

If we have `instance Functor (WrappedFunctor Maybe)`:

```
> :t fmap (+1) (WrappedFunctor (pure 3))
fmap (+1) (WrappedFunctor (pure 3))
  :: (Num b, Applicative f, Functor (WrappedFunctor f)) => WrappedFunctor f b
```

If we have `instance (f ~ Maybe) => Functor (WrappedFunctor f)`:

```
> :t fmap (+1) (WrappedFunctor (pure 3))
fmap (+1) (WrappedFunctor (pure 3)) :: Num b => WrappedFunctor Maybe b
```

With the latter, we get much better type inference. The downside is that this instance overlaps any more concrete instances, so we couldn’t (for example) define an instance for `WrappedFunctor Identity` as well.

But if you only need one instance, it’s a neat trick.

Here’s a concrete example from the dejafu-2.0.0.0 branch. I’ve introduced a `Program` type, to model concurrent programs. There’s one sort of `Program`, a `Program Basic`, which can be used as a concurrency monad (a `MonadConc`) directly. The instances are defined like so:

```haskell
instance (pty ~ Basic, MonadIO n) => MonadIO (Program pty n) where
  -- ...

instance (pty ~ Basic) => MonadTrans (Program pty) where
  -- ...

instance (pty ~ Basic) => MonadCatch (Program pty n) where
  -- ...

instance (pty ~ Basic) => MonadThrow (Program pty n) where
  -- ...

instance (pty ~ Basic) => MonadMask (Program pty n) where
  -- ...

instance (pty ~ Basic, Monad n) => MonadConc (Program pty n) where
  -- ...
```

If instead the instances had been defined for `Program Basic n`, then the type checker would have complained that the `pty` parameter is (in many cases) polymorphic, and not used these instances. This means every single use of a `Program pty n`, where `pty` was not otherwise constrained, would need a type annotation. By instead formulating the instances this way, the type checker *knows* that if you use a `Program pty n` as a `MonadConc`, then it must be a `Program Basic n`.

This has turned a potentially huge breaking change, requiring everyone who uses dejafu to add type annotations to their tests, into something which just works.

You can express a bunch of interesting problems in terms of ILP, and there are solvers which do a pretty good job of finding good solutions quickly. One of those interesting problems is scheduling, and there’s a nice write-up of how PyCon uses an ILP solver to generate schedules.

Another problem is rota generation, which is after all just a sort of scheduling. I have implemented a rota generator for GOV.UK’s technical support, and this memo is about how it works.

What is a rota?

Well, there are a bunch of time slots \(\mathcal T\), roles \(\mathcal R\), and people \(\mathcal P\). We can represent the assignments as a 3D binary matrix:

\[
\begin{split}
A_{tpr} = \begin{cases}
1,&\text{ if, in time }t\text{, person }p\text{ is scheduled in role }r\\
0,&\text{otherwise}
\end{cases}
\end{split}
\]

Next we need some constraints on what a valid rota looks like.

For every pair of slots and roles, the sum of the assignments should be 1:

\[ \forall t \in \mathcal T \text{, } \forall r \in \mathcal R \text{, } \sum_{p \in \mathcal P} A_{tpr} = 1 \]

For every pair of slots and people, the sum of the assignments should be 0 (if they’re not assigned anything) or 1 (if they are):

\[ \forall t \in \mathcal T \text{, } \forall p \in \mathcal P \text{, } \sum_{r \in \mathcal R} A_{tpr} \in \{0, 1\} \]

We might give our people time off (how generous!), so there’s no point in generating a rota where someone gets scheduled during their time off.

Given a function \(leave : \mathcal P \mapsto 2^{\mathcal T}\), which gives the set of slots someone is on leave, then: for every pair of slots and people, all roles should be unassigned if the slot is in \(leave(p)\):

\[ \forall p \in \mathcal P \text{, } \forall t \in leave(p) \text{, } \forall r \in \mathcal R \text{, } A_{tpr} = 0 \]

We might also have a maximum number of shifts any one person can be assigned to in a rota.

Given such a limit \(M\), then: for every person, the sum of the assignments across *all* slots should be less than or equal to \(M\):

\[ \forall p \in \mathcal P \text{, } \sum_{t \in \mathcal T} \sum_{r \in \mathcal R} A_{tpr} \leqslant M \]

If all we wanted was constraints, then we could use a SAT solver, and it would probably do a better job than an ILP solver as a SAT solver is *built* for solving boolean constraints! But there’s one thing which is more easily expressible to an ILP solver than a SAT solver: objective functions to optimise.

Given our above constraints, we will get *a* rota, but it might not be very fair. One person might be scheduled ten times, and another not at all. We can encourage the solver to be more fair by providing it with an objective which results in more people being assigned.

First we’ll need an auxiliary variable to check whether someone has been assigned at all:

\[
\begin{split}
X_p = \begin{cases}
1,&\text{ if person }p\text{ has any assignments}\\
0,&\text{otherwise}
\end{cases}
\end{split}
\]

We can use two new constraints to set the value of these \(X\) variables:

\[ \forall t \in \mathcal T \text{, } \forall p \in \mathcal P \text{, } \forall r \in \mathcal R \text{, } X_p \geqslant A_{tpr} \]

\[ \forall p \in \mathcal P \text{, } X_p \leqslant \sum_{t \in \mathcal T} \sum_{r \in \mathcal R} A_{tpr} \]

As both \(A_{tpr}\) and \(X_p\) are binary variables, this means \(X_p\) will be 1 if (first constraint) and only if (second constraint) person \(p\) has any assignments at all.

We then give an objective to the solver:

\[ \textbf{maximise } \sum_{p \in \mathcal P} X_p \]

The only way to increase the value of the sum is by assigning roles to more people, so that is what the solver will do.

PuLP is a Python library for interfacing with ILP solvers. It provides a somewhat nicer interface than directly dealing with the matrices and vectors on which ILP solvers operate, letting us express constraints as equations much like I have here.

Here’s how to express the above with PuLP:

```python
import pulp

# Parameters
slots = 0
people = []
roles = []
leave = {}
max_assignments_per_person = 0

# Create the 'problem'
problem = pulp.LpProblem("rota generator", sense=pulp.LpMaximize)

# Create variables
assignments = pulp.LpVariable.dicts(
    "A",
    ((slot, person, role) for slot in range(slots) for person in people for role in roles),
    cat="Binary")
is_assigned = pulp.LpVariable.dicts("X", people, cat="Binary")

# Add constraints
for slot in range(slots):
    for role in roles:
        # In every time slot, each role is assigned to exactly one person
        problem += pulp.lpSum(assignments[slot, person, role] for person in people) == 1
    for person in people:
        # Nobody is assigned multiple roles in the same time slot
        problem += pulp.lpSum(assignments[slot, person, role] for role in roles) <= 1

for person, bad_slots in leave.items():
    for slot in bad_slots:
        for role in roles:
            # Nobody is assigned a role in a slot they are on leave for
            problem += assignments[slot, person, role] == 0

for person in people:
    # Nobody works too many shifts
    problem += pulp.lpSum(assignments[slot, person, role] for slot in range(slots) for role in roles) <= max_assignments_per_person

# Constrain 'is_assigned' auxiliary variable
for slot in range(slots):
    for person in people:
        for role in roles:
            # If
            problem += is_assigned[person] >= assignments[slot, person, role]
for person in people:
    # Only if
    problem += is_assigned[person] <= pulp.lpSum(assignments[slot, person, role] for slot in range(slots) for role in roles)

# Add objective
problem += pulp.lpSum(is_assigned[person] for person in people)

# Solve with the Coin/Cbc solver
problem.solve(pulp.solvers.COIN_CMD())

# Print the solution!
for slot in range(slots):
    print(f"Slot {slot}:")
    for role in roles:
        for person in people:
            if pulp.value(assignments[slot, person, role]) == 1:
                print(f"  {role}: {person}")
```

The quantifiers have become `for...in` loops and the summations have become calls to `pulp.lpSum` with a generator expression iterating over the values of interest, but other than that it’s fairly straightforward.

With the parameters:

```python
slots = 5
people = ["Spongebob", "Squidward", "Mr. Crabs", "Pearl"]
roles = ["Fry Cook", "Cashier", "Money Fondler"]
leave = {"Mr. Crabs": [0, 2, 3, 4]}
max_assignments_per_person = 5
```

We get the output:

```
Slot 0:
  Fry Cook: Pearl
  Cashier: Squidward
  Money Fondler: Spongebob
Slot 1:
  Fry Cook: Spongebob
  Cashier: Mr. Crabs
  Money Fondler: Pearl
Slot 2:
  Fry Cook: Spongebob
  Cashier: Squidward
  Money Fondler: Pearl
Slot 3:
  Fry Cook: Spongebob
  Cashier: Pearl
  Money Fondler: Squidward
Slot 4:
  Fry Cook: Squidward
  Cashier: Spongebob
  Money Fondler: Pearl
```

If you play around with this you might notice two things:

- The rota you get is always the same.
- If there is no rota which meets the constraints, you get rubbish out!

This is due to how Cbc works. If you try GLPK, a different solver, you’ll still get a deterministic rota, but if there isn’t one meeting the constraints you’ll (probably) get back an empty rota. Solving ILP in the general case is NP-complete, so solvers use heuristics. Both Cbc and GLPK are deterministic, but they differ in heuristics.

You can check the `problem.status` to see if it’s solved or not:

```python
if problem.status != pulp.constants.LpStatusOptimal:
    raise Exception("Unable to solve problem.")
```

Another way to make the solver go wrong is by having a wide range of values in your problem. I’m not sure why this can cause a problem, but it does.

A simple way to introduce randomisation is to give the solver a randomly generated objective to maximise. For example, we can assign a score to every possible allocation, and try to maximise the overall score:

```python
import random

randomise = pulp.lpSum(
    random.randint(0, 1) * assignments[slot, person, role]
    for slot in range(slots)
    for person in people
    for role in roles)
```

As we want the actual objective function to take priority, scale it up:

```python
# Add objective
problem += pulp.lpSum(is_assigned[person] for person in people) * 100 + randomise
```

Now if we run the tool multiple times, we get different rotas:

```
$ python3 rota.py
Slot 0:
  Fry Cook: Spongebob
  Cashier: Squidward
  Money Fondler: Pearl
Slot 1:
  Fry Cook: Pearl
  Cashier: Spongebob
  Money Fondler: Mr. Crabs
Slot 2:
  Fry Cook: Spongebob
  Cashier: Squidward
  Money Fondler: Pearl
Slot 3:
  Fry Cook: Squidward
  Cashier: Spongebob
  Money Fondler: Pearl
Slot 4:
  Fry Cook: Squidward
  Cashier: Pearl
  Money Fondler: Spongebob
$ python3 rota.py
Slot 0:
  Fry Cook: Spongebob
  Cashier: Squidward
  Money Fondler: Pearl
Slot 1:
  Fry Cook: Spongebob
  Cashier: Squidward
  Money Fondler: Mr. Crabs
Slot 2:
  Fry Cook: Spongebob
  Cashier: Squidward
  Money Fondler: Pearl
Slot 3:
  Fry Cook: Pearl
  Cashier: Squidward
  Money Fondler: Spongebob
Slot 4:
  Fry Cook: Squidward
  Cashier: Spongebob
  Money Fondler: Pearl
```

The downside to this approach is that we might accidentally generate a random objective which is really hard to maximise, making the solver do a lot of work when all we really want is an arbitrary solution.

The GOV.UK support rota is a bit more complex than the example above. A typical rota runs for 12 weeks, with 1 week being 1 slot, in the above parlance. There are two types of roles, and constraints about who can occupy which roles:

**In-hours support roles:**

- *Primary in-hours*, must have been secondary in-hours at least three times.
- *Secondary in-hours*, must have been shadow at least two times.
- *Shadow*, must not have shadowed twice before. This role is optional.

**Out-of-hours support roles:**

- *Primary on-call*, no special requirements.
- *Secondary on-call*, must have been primary on-call at least three times.

There’s an asymmetry there: the primary in-hours needs to be experienced, but the opposite is the case for on-call roles. This is intentional! If the primary on-call were more experienced, they would resolve every issue themselves and the less experienced one would never get to learn anything.

There are separate pools for each type: there are some people who can do in-hours support, some people who can do out-of-hours support, and some people who can do both.

To ensure individuals and teams aren’t over-burdened with support roles, there are some constraints about when people can be scheduled:

- Someone can’t be on in-hours support in two adjacent weeks.
- Two people on in-hours support in the same week (or adjacent weeks) can’t be on the same team.

And there is also a limit on the number of in-hours and out-of-hours roles someone can do across the entire rota.

The objective function is a bit more complex too:

- As above, we want to maximise the number of people on the rota.
- We want to maximise the number of weeks when the secondary in-hours has done it fewer than three times.
- We want to maximise the number of weeks when the primary out-of-hours has done it fewer than three times.
- We want to maximise the number of weeks with a shadow.

I won't go through all of the constraints, as they're mostly more of the same, but one is particularly interesting because it's pretty hard to implement: the rule that the primary in-hours must have been secondary in-hours at least three times.

The logic here is simple, but the language of ILP is very limited: you can't directly express `if...then`-style constraints between variables. This would be fine if we only wanted to limit the primary in-hours role to people who had been secondary in-hours at least three times *before this rota period*, as we can determine that statically.

But that’s too restrictive. If someone has been secondary in-hours two times before the start of the rota, and is secondary in-hours in one week, they should be able to be primary in-hours in subsequent weeks.

To work around this we’ll need some auxiliary variables.

Firstly, let’s record how many times someone has been a secondary at the start of each slot:

\[ \begin{split} S_{tp} = \begin{cases} \text{the number of times person }p\text{ has been a secondary before the start of this rota},&\text{ if }t = 0\\ S_{t-1,p} + A_{t-1,p,\text{secondary}},&\text{otherwise} \end{cases} \end{split} \]

Unlike previous variables we've seen, this is not a binary variable, but it is still integral. Translating the above into ILP constraints is straightforward:

\[ \forall p \in \mathcal P \text{, } S_{0,p} = \text{the number of times person }p\text{ (etc)} \]

\[ \forall t \geqslant 1 \in \mathcal T \text{, } \forall p \in \mathcal P \text{, } S_{tp} = S_{t-1,p} + A_{t-1,p,\text{secondary}} \]

Now we can use a trick I found to encode conditionals in ILP. The trick is to introduce an auxiliary variable, \(D \in \{0,1\}\), and use constraints to ensure that \(D = 0\) when the condition goes one way, and \(D = 1\) when it goes the other.

Here is how we encode `if X > k then Y >= 0 else Y <= 0`, where `k` is constant:

\[ \begin{align} 0 &\lt X - k + m \times D \\ 0 &\leqslant Y + m \times D \\ X - k &\leqslant m \times (1 - D) \\ Y &\leqslant m \times (1 - D) \end{align} \]

Here \(X\) and \(Y\) are the ILP variables from our conditional, \(D\) is the auxiliary variable we introduced, and \(m\) is some large constant, way bigger than the possible maximum values of \(X\) or \(Y\). Let’s walk through this, firstly here’s the case where \(D = 0\):

\[ \begin{align} 0 &\lt X - k \\ 0 &\leqslant Y \\ X - k &\leqslant m \\ Y &\leqslant m \end{align} \]

Because \(m\) is a large constant, the bottom two constraints are trivially true, so they can be removed. With a little rearranging, we have:

\[ \begin{align} k &\lt X \\ 0 &\leqslant Y \\ \end{align} \]

So if \(D = 0\), \(X\) is strictly greater than \(k\) (the condition is true), and \(Y \geqslant 0\). That's the true branch sorted!

Now let’s look at the \(D = 1\) branch:

\[ \begin{align} 0 &\lt X - k + m \\ 0 &\leqslant Y + m \\ X - k &\leqslant 0 \\ Y &\leqslant 0 \end{align} \]

Because \(m\) is a large constant, this time we can get rid of the first two constraints. With a little rearranging, we get:

\[ \begin{align} X &\leqslant k \\ Y &\leqslant 0 \end{align} \]

So if \(D = 1\), \(X\) is not strictly greater than \(k\) and \(Y\) is at most zero. Remember, the real "\(Y\)" we're using is an \(A_{tpr}\) value, which is a binary value, so the overall effect is to specify that it must be zero. Adding a constraint \(Y \geqslant 0\) would do the same job.

Each conditional needs a fresh \(D\) variable. So adding these conditionals in results in a lot of extra variables and constraints:

\[ \begin{alignat*}{4} &\forall t \in \mathcal T \text{, } \forall p \in \mathcal P \text{, } & 0 &\lt S_{tp} - 2 + 999 \times D_{tp} \\ &\forall t \in \mathcal T \text{, } \forall p \in \mathcal P \text{, } &0 &\leqslant A_{tp,\text{primary}} + 999 \times D_{tp} \\ &\forall t \in \mathcal T \text{, } \forall p \in \mathcal P \text{, } &S_{tp} - 2 &\leqslant 999 \times (1 - D_{tp}) \\ &\forall t \in \mathcal T \text{, } \forall p \in \mathcal P \text{, } &A_{tp,\text{primary}} &\leqslant 999 \times (1 - D_{tp}) \end{alignat*} \]

Here 2 has been substituted for \(k\), as someone needs to have been a secondary at least three times to be a primary; and 999 has been substituted for \(m\), which is larger than the number of secondary shifts someone could actually have done.
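As a sanity check, the encoding can be brute-forced in plain Python. This is a sketch, not part of the rota tool: `X` stands in for \(S_{tp}\), `Y` for the binary \(A_{tp,\text{primary}}\), and the `feasible` helper is hypothetical.

```python
# Brute-force check of the big-M conditional trick, with k = 2 and
# m = 999 as in the rota constraints above.
k, m = 2, 999

def feasible(X, Y, D):
    # The four constraints, with the auxiliary variable D fixed.
    return (0 < X - k + m * D
            and 0 <= Y + m * D
            and X - k <= m * (1 - D)
            and Y <= m * (1 - D))

for X in range(13):        # plausible secondary counts
    for Y in (0, 1):       # the assignment variable is binary
        ok = any(feasible(X, Y, D) for D in (0, 1))
        # Y = 1 is only feasible when X > k; Y = 0 is always feasible.
        assert ok == (Y == 0 or X > k)
```

So the solver is free to pick whichever value of \(D\) makes its chosen assignment feasible, but it can never make someone primary who hasn't been secondary at least three times.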

Let’s cover one more type of constraint: not over-burdening teams by taking all of their members away to be on support at once. This one is pretty simple, but does require a bit more information about the people, specifically, what team they’re on.

Given a function \(team : \mathcal P \to 2^{\mathcal P}\), which maps a person to the set of people on their team, then for every slot and every pair of distinct people on the same team, at most one of any pair of in-hours assignments can hold:

\[ \forall t \in \mathcal T \text{, } \forall p_1 \in \mathcal P \text{, } \forall p_2 \neq p_1 \in team(p_1) \text{, } \\ \forall r_1 \in \{\text{primary}, \text{secondary}, \text{shadow}\} \text{, } \\ \forall r_2 \in \{\text{primary}, \text{secondary}, \text{shadow}\} \text{, } \\ A_{t,p_1,r_1} + A_{t,p_2,r_2} \leqslant 1 \]

My GOV.UK rota generator is on GitHub, and also on Heroku as The Incredible Rota Machine.

I’ve timed it on my laptop by running it repeatedly overnight, and found that the time to generate a rota varies between about 10s and 15m, but the median is about 30s. I expect it’ll be slower on Heroku, though.

It's already paying off: I saved the person who usually puts together the rota an hour and a half! A new rota is needed every quarter, and the tool took me three and a half days to make, so it'll pay for itself in a mere four and a half years!

It was a fun project, and a neat thing to do in firebreak—the one-week “do whatever you want as long as it’s useful” gap we have between quarters—but probably not worth it if you’re looking to save a bit of time.

The C standard bakes in enough details about pointers such that the amount of memory a C program can access (even on a hypothetical infinite-memory machine) is bounded and statically known. Access to an unbounded amount of memory is necessary (but not sufficient) for Turing completeness. Therefore C is not Turing complete.

This is an argument about the *specification* of C, not any particular *implementation*. The fact that no real machine has unbounded memory is totally irrelevant. This is not a criticism of C.

A friend told me that C isn’t actually Turing-complete due to the semantics of pointers, so I decided to dig through the (C11) spec to find evidence for this claim. The two key bits are 6.2.6.1.4 and 6.5.9.5:

> Values stored in non-bit-field objects of any other object type consist of `n × CHAR_BIT` bits, where `n` is the size of an object of that type, in bytes. The value may be copied into an object of type `unsigned char [n]` (e.g., by `memcpy`); the resulting set of bytes is called the object representation of the value. Values stored in bit-fields consist of `m` bits, where `m` is the size specified for the bit-field. The object representation is the set of `m` bits the bit-field comprises in the addressable storage unit holding it. Two values (other than NaNs) with the same object representation compare equal, but values that compare equal may have different object representations.

The important bit is the use of the definite article in the first sentence, "where `n` is **the** size of an object of that type": this means that all types have a size which is known statically.

> Two pointers compare equal if and only if both are null pointers, both are pointers to the same object (including a pointer to an object and a subobject at its beginning) or function, both are pointers to one past the last element of the same array object, or one is a pointer to one past the end of one array object and the other is a pointer to the start of a different array object that happens to immediately follow the first array object in the address space.

Pointers to distinct objects of the same type compare unequal. (Interestingly, you could have a distinct heap for every type, with overlapping pointer values, and this is totally fine according to the spec! It doesn't help you, however, because the number of types is finite: they're specified in the text of the program, which is necessarily finite.) As pointers are fixed in size, this means that there's only a finite number of them. You can take a pointer to any object ("The unary `&` operator yields the address of its operand.", first sentence of 6.5.3.2.3), therefore there is a finite number of objects that can exist at any one time!

However, C is slightly more interesting than a finite-state machine. We have one more mechanism to store values: the return value of a function! Fortunately, the C spec doesn't impose a maximum stack depth ("Recursive function calls shall be permitted, both directly and indirectly through any chain of other functions.", 6.5.2.2.11; nothing else is said on the matter), and so we can in principle implement a pushdown automaton.

Just an interesting bit of information about C, because it’s so common to see statements like “because C is Turing-complete…”. Of course, on a real computer, nothing is Turing-complete, but C doesn’t even manage it in theory.

In a discussion about this on Twitter, the possibility of doing some sort of virtual memory shenanigans to make a pointer see different things depending on its context of use came up. I believe that this is prohibited by the semantics of object lifetimes (6.2.4.2):

> The lifetime of an object is the portion of program execution during which storage is guaranteed to be reserved for it. An object exists, has a constant address, and retains its last-stored value throughout its lifetime. If an object is referred to outside of its lifetime, the behavior is undefined. The value of a pointer becomes indeterminate when the object it points to (or just past) reaches the end of its lifetime.

The lifetime for heap-allocated objects is from the allocation until the deallocation (7.22.3.1):

> The lifetime of an allocated object extends from the allocation until the deallocation. Each such allocation shall yield a pointer to an object disjoint from any other object.

I had a fun discussion on IRC, where someone argued that the definition of pointer equality does not mention the object representation, so the fixed object representation size is irrelevant: pointers could somehow carry extra information which is not part of the object representation.

It took a while to resolve, but I believe the final sentence of the object representation quote and the first clause of the pointer equality quote, together with the fact that pointers are values, resolves this:

1. Pointers are values.
2. "Two values (other than NaNs) with the same object representation compare equal, but values that compare equal may have different object representations."
3. Points (1) and (2) mean that pointers with the same object representation compare equal.
4. "Two pointers compare equal if and only if…"
5. The "only if" in (4) means that if two pointers compare equal, then the rest of the rules apply.
6. Points (3) and (5) mean that two pointers with the same object representation compare equal, and therefore point to the same object (or are both null pointers, etc).

This means that there cannot be any further information than what is stored in the object representation.

Interestingly, I believe this forbids something I initially thought to be the case: I say in a footnote that different types could have different heaps. They *could*, but that doesn’t let you use the same object representation for pointers of different types!

Amazon Simple Notification Service (SNS) lets you set up “topics”, subscribe to them through a variety of protocols (including SMS and email), and send a message to a topic by hitting a web endpoint. This seemed the simplest way to get my computer to text me.

You’ll need an AWS account, and you’ll also need to be okay with SNS not being free. Fortunately, unless you’re going to be sending hundreds of notifications, it’s pretty cheap. Then you need to create an SNS topic and add subscribers to it, which you can do through the AWS web interface.

I set up the SNS topic and SMS notifications with Terraform, a tool for provisioning infrastructure. Here’s a self-contained Terraform config for an SNS topic with SMS notifications:

```
locals {
  phone      = "your phone number"
  access_key = "your aws access key"
  secret_key = "your aws secret key"
}

provider "aws" {
  access_key = "${local.access_key}"
  secret_key = "${local.secret_key}"
  region     = "eu-west-1"
}

resource "aws_sns_topic" "topic_name" {
  name = "topic-name"
}

resource "aws_sns_topic_subscription" "topic_name_sms" {
  topic_arn = "${aws_sns_topic.topic_name.arn}"
  protocol  = "sms"
  endpoint  = "${local.phone}"
}
```

In my actual Terraform configuration, the phone number and keys are in a file which isn’t checked into the repository. Unfortunately Terraform can’t set up email subscriptions, as they need to be manually confirmed. So I had to set that up via the AWS web interface.

You can send test messages through the AWS web interface, so try that to make sure everything is working.

SNS exposes a web endpoint, so the "simplest" way to send a message to your topic would be to `curl` that. I decided to use the excellent boto3 library for Python instead.

I quickly whipped together this script, which I store in `~/bin/aws-sns`:

```python
#! /usr/bin/env nix-shell
#! nix-shell -i python3 -p python3Packages.boto3

'''
A script to push a message from stdin to a SNS topic.
'''

import argparse
import boto3
import sys

arg_parser = argparse.ArgumentParser(description=__doc__)
arg_parser.add_argument(
    '-t', dest='topic', required=True, help='Topic ARN.')
arg_parser.add_argument(
    '-s', dest='subject', required=True, help='Subject for email.')
arg_parser.add_argument(
    '-R', dest='region', required=False, help='Region to use.',
    default='eu-west-1')
parsed_args = arg_parser.parse_args()

# boto3 checks for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY env
# vars automatically.
client = boto3.client('sns', region_name=parsed_args.region)

# message body is stdin
message = sys.stdin.read()
response = client.publish(
    TopicArn=parsed_args.topic,
    Subject=parsed_args.subject,
    Message=message
)

print('Message ID: %s' % response['MessageId'])
```

This is a nix-shell script, which fetches boto3 automatically when invoked. If you’re not a nix user, you’d do the usual virtualenv/source/pip dance.

You’ll need to create a user in the AWS web interface with permissions to poke SNS, and note down their access key and secret key. With those keys, and the ARN of your SNS topic, you should be able to send a message from the command line:

```
$ export AWS_ACCESS_KEY_ID="foo"
$ export AWS_SECRET_ACCESS_KEY="bar"
$ echo "Hello, world" | aws-sns -t "baz" -s "Test Message"
```

Now we have the alerting, so we just need the monitoring. Firstly we need a script to check whatever condition we care about (zpool status in my case), and to call the SNS script if it’s not good.

Here’s a self-contained zpool script:

```bash
#!/usr/bin/env bash

export AWS_ACCESS_KEY_ID="foo"
export AWS_SECRET_ACCESS_KEY="bar"

ZFS_TOPIC_ARN="baz"

if [[ "`zpool status -x`" != "all pools are healthy" ]]; then
  zpool status | aws-sns -t "$ZFS_TOPIC_ARN" -s "zfs zpool status"
fi
```

The final piece of the puzzle is a systemd timer (or cronjob, whatever your system uses) to periodically run the script. I have mine run every 12 hours. Here’s the service definition from my NixOS config:

```nix
systemd.timers.monitoring-scripts = {
  wantedBy = [ "timers.target" ];
  timerConfig = {
    OnCalendar = "0/12:00:00";
  };
};

systemd.services.monitoring-scripts = {
  description = "Run monitoring scripts";
  serviceConfig.WorkingDirectory = "/home/barrucadu/monitoring-scripts";
  serviceConfig.ExecStart = "${pkgs.zsh}/bin/zsh --login -c ./monitor.sh";
  serviceConfig.User = "barrucadu";
  serviceConfig.Group = "users";
};
```

Which generates this systemd timer:

```
[Unit]

[Timer]
OnCalendar=0/12:00:00
```

And this unit (ignore the scary nix paths):

```
[Unit]
Description=Run monitoring scripts

[Service]
Environment="LOCALE_ARCHIVE=/nix/store/vg0s4sl74f5ik64wrrx0q9n6m48vvmgs-glibc-locales-2.26-131/lib/locale/locale-archive"
Environment="PATH=/nix/store/cb3slv3szhp46xkrczqw7mscy5mnk64l-coreutils-8.29/bin:/nix/store/364b5gkvgrm87bh1scxm5h8shp975n0r-findutils-4.6.0/bin:/nix/store/s63b2myh6rxfl4aqwi9yxd6rq66djk33-gnugrep-3.1/bin:/nix/store/navldm477k3ar6cy0zlw9rk43i459g69-gnused-4.4/bin:/nix/store/f9dbl8y4zjgr81hs3y3zf187rqv83apz-systemd-237/bin:/nix/store/cb3slv3szhp46xkrczqw7mscy5mnk64l-coreutils-8.29/sbin:/nix/store/364b5gkvgrm87bh1scxm5h8shp975n0r-findutils-4.6.0/sbin:/nix/store/s63b2myh6rxfl4aqwi9yxd6rq66djk33-gnugrep-3.1/sbin:/nix/store/navldm477k3ar6cy0zlw9rk43i459g69-gnused-4.4/sbin:/nix/store/f9dbl8y4zjgr81hs3y3zf187rqv83apz-systemd-237/sbin"
Environment="TZDIR=/nix/store/brib029xs79az5vhjd5nhixp1l39ni31-tzdata-2017c/share/zoneinfo"
ExecStart=/nix/store/77bsskn86yf6h11mx96xkxm9bqv42kqg-zsh-5.5.1/bin/zsh --login -c ./monitor.sh
Group=users
User=barrucadu
WorkingDirectory=/home/barrucadu/monitoring-scripts
```

The only thing left to do was to test the whole set-up by simulating a hardware failure.

I powered off nyarlathotep, unplugged a drive, and booted it back up again. I then ran the monitoring script directly, to ensure that it worked, and then waited until midnight (which was closer than noon, at the time I was doing this) to check that the timer worked.

Both SMSes and emails came through:

This got me thinking about *market values*. If I want to see the current market value of all my assets, I need to convert them all to the same currency, using a recent exchange rate. So I now have a script to fetch, once a day, exchange rates between £ and everything else:

```
P 2018-05-30 BTC £5501.58
P 2018-05-30 ETH £413.01
P 2018-05-30 LTC £87.85
P 2018-05-30 EUR £0.8775
P 2018-05-30 JPY £0.0069
P 2018-05-30 USD £0.7531
P 2018-05-30 VANEA £210.24
```

My script exports market values to influxdb, so I can see how the market value of my assets (in £) has changed over time. Great!

But what if I want to see the market value in a currency other than £? Like USD, for instance? The problem is that I have all these exchange rates:

But I don’t have, say, the exchange rate from EUR to USD.

Well it turns out that the reflexive-symmetric-transitive closure of that graph is just the thing I want! It looks pretty nasty with 7 currencies, so here it is with just 3:

Let's see how to calculate those `?`s.

I could pull in a functional graph library, but the graphs I’m dealing with are so small that I may as well just implement the few operations I need myself.

A graph is essentially a function `node -> node -> Maybe label`:

```haskell
import Data.Map (Map)
import qualified Data.Map as M

type Graph node label = Map node (Map node label)
```

We need an empty graph and, given a graph, we need to be able to add nodes and edges. As our nodes are the keys in the map, they need to be orderable.

```haskell
-- | A graph with no nodes or edges.
empty :: Ord n => Graph n l
empty = M.empty

-- | Add a node to a graph.
addNode :: Ord n => n -> Graph n l -> Graph n l
addNode n = M.insertWith (\_ old -> old) n M.empty
```

We don’t allow duplicate edges, as that means we have two exchange rates between the same pair of currencies, which doesn’t make much sense. So adding edges is a little more involved, as the edge might already exist:

```haskell
-- | Add an edge to a graph, combining edges if they exist.
--
-- If the source node doesn't exist, does not change the graph.
addEdge
  :: Ord n
  => (l -> l -> l)  -- ^ Function to combine edge labels.
  -> n              -- ^ Source node.
  -> n              -- ^ Target node.
  -> l              -- ^ New label.
  -> Graph n l
  -> Graph n l
addEdge combine from to label graph = case M.lookup from graph of
  Just edges ->
    let edges' = M.insertWith combine to label edges
    in M.insert from edges' graph
  Nothing -> graph
```

Ok, so we can represent our currency graph. Now we need to compute the reflexive-symmetric-transitive closure.

Reflexivity lets us go from a currency to itself:

```haskell
-- | Take the reflexive closure by adding edges with the given label
-- where missing.
reflexiveClosure :: Ord n => l -> Graph n l -> Graph n l
reflexiveClosure label graph = foldr (.) id
  [ addEdge (\_ old -> old) nA nA label
  | nA <- M.keys graph
  ] graph
```

If we know an exchange rate from A to B, symmetry gives us an exchange rate from B to A:

```haskell
-- | Take the symmetric closure by adding new edges, transforming
-- existing labels.
symmetricClosure :: Ord n => (l -> l) -> Graph n l -> Graph n l
symmetricClosure mk graph = foldr (.) id
  [ addEdge (\_ old -> old) nB nA (mk lAB)
  | (nA, edges) <- M.assocs graph
  , (nB, lAB)   <- M.assocs edges
  ] graph
```

If we know an exchange rate from A to B, and from B to C, transitivity gives us an exchange rate from A to C:

```haskell
-- | Take the transitive closure by adding new edges, combining
-- existing labels.
transitiveClosure :: (Ord n, Eq l) => (l -> l -> l) -> Graph n l -> Graph n l
transitiveClosure combine = fixEq step
  where
    fixEq f = find . iterate f
      where
        find (a1:a2:as)
          | a1 == a2  = a1
          | otherwise = find (a2:as)

    step graph = foldr (.) id
      [ addEdge (\_ old -> old) nA nC (combine lAB lBC)
      | (nA, edges) <- M.assocs graph
      , (nB, lAB)   <- M.assocs edges
      , (nC, lBC)   <- M.assocs (M.findWithDefault M.empty nB graph)
      ] graph
```
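The `fixEq` helper just iterates a function until its output stops changing. A minimal Python sketch of the same idea (the `fix_eq` name is mine, not from the post's code):

```python
def fix_eq(f, x):
    # Iterate f from x until a fixed point is reached: f(x) == x.
    while True:
        x2 = f(x)
        if x2 == x:
            return x
        x = x2

# Repeatedly incrementing, capped at 10, reaches the fixed point 10.
assert fix_eq(lambda n: min(n + 1, 10), 0) == 10
```

This terminates as long as each step either changes the graph by adding edges or leaves it alone, which is the case here: there are finitely many possible edges.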

Exchange rates have three properties which we can make use of:

1. Any currency has an exchange rate with itself of 1.
2. If we have an exchange rate of `x` from A to B, then the rate from B to A is `1/x`.
3. If we have an exchange rate of `x` from A to B, and an exchange rate of `y` from B to C, then the rate from A to C is `x*y`.

So, given our graph of exchange rates, we can fill in the blanks like so:

```haskell
-- | Fill in the blanks in an exchange rate graph.
completeRates :: (Ord n, Eq l, Fractional l) => Graph n l -> Graph n l
completeRates =
  transitiveClosure (*) . symmetricClosure (1/) . reflexiveClosure 1
```
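To cross-check the whole pipeline without Haskell, here's a rough Python sketch of the same closure over a dict of `(from, to)` pairs, using two of the £-quoted rates from earlier. The `complete_rates` helper is hypothetical, not from my actual script; like the Haskell version, existing rates always win over derived ones.

```python
def complete_rates(rates):
    # rates: {(src, dst): rate}.  Returns the reflexive-symmetric-
    # transitive closure, never overwriting an existing rate.
    rates = dict(rates)
    currencies = {c for pair in rates for c in pair}
    for c in currencies:                    # reflexive: rate 1 with itself
        rates.setdefault((c, c), 1.0)
    changed = True
    while changed:                          # iterate to a fixed point
        changed = False
        for (a, b), r in list(rates.items()):
            if (b, a) not in rates:         # symmetric: invert the rate
                rates[(b, a)] = 1 / r
                changed = True
        for (a, b), r1 in list(rates.items()):
            for (b2, c), r2 in list(rates.items()):
                if b == b2 and (a, c) not in rates:
                    rates[(a, c)] = r1 * r2  # transitive: multiply rates
                    changed = True
    return rates

# Everything is quoted against GBP, as in the price file above.
rates = complete_rates({("EUR", "GBP"): 0.8775, ("USD", "GBP"): 0.7531})
# EUR -> USD now exists, going via GBP: 0.8775 * (1 / 0.7531)
assert abs(rates[("EUR", "USD")] - 0.8775 / 0.7531) < 1e-9
```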

There’s also a fourth property we can assume in reality:

- Any two paths between the same two currencies work out to the same exchange rate.

Otherwise we could make a profit by going around in a circle, and I’m sure someone would have noticed that already and made a lot of money. In our implementation however, we can’t assume that. Exchange rates available online have limited precision, and rounding errors will introduce more problems. But in general things will be close, so it doesn’t matter too much from the perspective of getting a rough idea of our personal finances.

So now I can look at my total assets in yen and feel like a millionaire:

I do not like mocking, and think it often does more harm than good unless you are very careful about what your test is actually testing.

Let’s say your program involves loading some data from disk, and you use a library to do this loading. Let’s say that there are a few ways in which a file can be invalid, and these are each signalled by the library raising a different exception.

Your code might look like this:

```ruby
def calculate_thing
  ExternalLibrary::Reader.new("data_file").frobnicate
rescue ExternalLibrary::MalformedData, ExternalLibrary::UnsupportedExtension
  nil
end
```

And your test might look like this:

```ruby
def calculate_thing_handles_file_errors
  errors = %w(ExternalLibrary::MalformedData ExternalLibrary::UnsupportedExtension)
  errors.each do |err|
    ExternalLibrary::Reader.any_instance.stubs(:frobnicate).raises(err.constantize)
    assert_nil calculate_thing
  end
end
```

This looks good: you’re catching exceptions in your program, and your test is throwing those and checking that they are handled. But what is this *really* testing?

The test is obviously correct, which isn't necessarily a bad thing as it guards against the code changing, but does it *really* test that you handle errors from the external library? I don't think so. If a new version of `ExternalLibrary` comes along and adds a third exception type, this test will not help you.

This test guards against the exception list in the code being changed, but does *not* check that all errors from the external library are handled.
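The same trap is easy to reproduce in Python with the standard library's `unittest.mock`. This is a contrived sketch: `Reader` here is a hypothetical stand-in for the external library, not real code from it.

```python
from unittest import mock

class Reader:
    """Hypothetical stand-in for ExternalLibrary::Reader."""
    class MalformedData(Exception): pass
    class UnsupportedExtension(Exception): pass
    class NewError(Exception): pass      # added in a "new version"

    def frobnicate(self):
        raise Reader.NewError()

def calculate_thing():
    # Handles the two error types the author knew about at the time.
    try:
        return Reader().frobnicate()
    except (Reader.MalformedData, Reader.UnsupportedExtension):
        return None

# The mocked test passes: both known exceptions are handled...
for exc in (Reader.MalformedData, Reader.UnsupportedExtension):
    with mock.patch.object(Reader, "frobnicate", side_effect=exc):
        assert calculate_thing() is None

# ...but the real library now raises a third exception, which the
# mocked test never exercises and calculate_thing does not handle.
try:
    calculate_thing()
    unhandled = False
except Reader.NewError:
    unhandled = True
assert unhandled
```

The mocked loop is green, yet the production code path blows up: the test only ever verifies the exceptions you already listed.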

The main problem with mocking is that it is very easy to write a reasonable test, and then to derive more confidence from it than you should.

Whenever you artificially change the behaviour of something, you need to be very clear about what your test is actually testing. It is much better to avoid the change if possible, possibly at the price of a more complex (but more realistic) test.

There's a lesser problem that it's easy to write a mock which doesn't exercise all the behaviour your program expects (imagine `calculate_thing` handled the two exceptions differently, but your mock only threw one of them, for example). This problem can be overcome with branch coverage.

It’s also something that dejafu does not do.

I’m going to use the “stores are transitively visible” litmus test as a running example. Here it is:

```haskell
import qualified Control.Monad.Conc.Class as C
import Test.DejaFu.Internal
import Test.DejaFu.SCT
import Test.DejaFu.SCT.Internal.DPOR
import Test.DejaFu.Types
import Test.DejaFu.Utils

storesAreTransitivelyVisible :: C.MonadConc m => m (Int, Int, Int)
storesAreTransitivelyVisible = do
  x <- C.newCRef 0
  y <- C.newCRef 0
  j1 <- C.spawn (C.writeCRef x 1)
  j2 <- C.spawn (do r1 <- C.readCRef x; C.writeCRef x 1; pure r1)
  j3 <- C.spawn (do r2 <- C.readCRef y; r3 <- C.readCRef x; pure (r2,r3))
  (\() r1 (r2,r3) -> (r1,r2,r3)) <$> C.readMVar j1 <*> C.readMVar j2 <*> C.readMVar j3
```

I picked this one because it’s kind of arbitrarily complex. It’s a small test, but it’s for the relaxed memory implementation, so there’s a lot going on. It’s a fairly dense test.

I’m now going to define a metric of trace complexity which I’ll justify in a moment:

```haskell
complexity :: Trace -> (Int, Int, Int, Int)
complexity = foldr go (0,0,0,0) where
  go (SwitchTo _, _, CommitCRef _ _) (w, x, y, z) = (w+1, x+1, y, z)
  go (Start    _, _, CommitCRef _ _) (w, x, y, z) = (w+1, x, y+1, z)
  go (Continue,   _, CommitCRef _ _) (w, x, y, z) = (w+1, x, y, z+1)
  go (SwitchTo _, _, _) (w, x, y, z) = (w, x+1, y, z)
  go (Start    _, _, _) (w, x, y, z) = (w, x, y+1, z)
  go (Continue,   _, _) (w, x, y, z) = (w, x, y, z+1)
```
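In Python terms, the metric is just a count over (decision, action) pairs. This is a rough analogue with a simplified trace representation, not dejafu's actual types:

```python
def complexity(trace):
    # trace: list of (decision, is_commit) pairs; decision is one of
    # "switch" (pre-emptive), "start" (non-pre-emptive), "continue".
    commits = preemptive = starts = continues = 0
    for decision, is_commit in trace:
        if is_commit:
            commits += 1
        if decision == "switch":
            preemptive += 1
        elif decision == "start":
            starts += 1
        else:
            continues += 1
    return (commits, preemptive, starts, continues)

# A start, three continues, and a pre-emptive switch to a commit:
trace = [("start", False), ("continue", False), ("continue", False),
         ("continue", False), ("switch", True)]
assert complexity(trace) == (1, 1, 1, 3)
```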

Using the `183-shrinking` branch, we can now get the first trace for every distinct result, along with its complexity:

```haskell
results :: Way -> MemType -> IO ()
results way memtype = do
  let settings = set lequality (Just (==)) $ fromWayAndMemType way memtype
  res <- runSCTWithSettings settings storesAreTransitivelyVisible
  flip mapM_ res $ \(efa, trace) ->
    putStrLn (show efa ++ "\t" ++ showTrace trace ++ "\t" ++ show (complexity trace))
```

Here are the results for systematic testing:

```
λ> results (systematically defaultBounds) SequentialConsistency
Right (1,0,1)  S0------------S1---S0--S2-----S0--S3-----S0--  (0,0,7,24)
Right (0,0,1)  S0------------S2-----S1---S0---S3-----S0--  (0,0,6,24)
Right (0,0,0)  S0------------S2-P3-----S1---S0--S2----S0---  (0,1,6,23)
Right (1,0,0)  S0------------S3-----S1---S0--S2-----S0---  (0,0,6,24)

λ> results (systematically defaultBounds) TotalStoreOrder
Right (1,0,1)  S0------------S1---S0--S2-----S0--S3-----S0--  (0,0,7,24)
Right (0,0,1)  S0------------S1-P2-----S1--S0---S3-----S0--  (0,1,6,23)
Right (0,0,0)  S0------------S1-P2---P3-----S1--S0--S2--S0---  (0,2,6,22)
Right (1,0,0)  S0------------S1-P3-----S1--S0--S2-----S0---  (0,1,6,23)

λ> results (systematically defaultBounds) PartialStoreOrder
Right (1,0,1)  S0------------S1---S0--S2-----S0--S3-----S0--  (0,0,7,24)
Right (0,0,1)  S0------------S1-P2-----S1--S0---S3-----S0--  (0,1,6,23)
Right (0,0,0)  S0------------S1-P2---P3-----S1--S0--S2--S0---  (0,2,6,22)
Right (1,0,0)  S0------------S1-P3-----S1--S0--S2-----S0---  (0,1,6,23)
```

Pretty messy, right? Here are the results for *random* testing:

```
λ> results (randomly (mkStdGen 0) 100) SequentialConsistency
Right (1,0,1)  S0-----P1-P0----P2-P1-P0-P3-P1-S2-P3--P0-P3-P0-P3-S2-P0-S2-P0--P2-S0-  (0,15,5,9)
Right (0,0,1)  S0-------P2-P1-P2-P0--P2-P0-P1-P0---S2-P3-P0-P2-S3---P1-S3-S0--  (0,12,5,12)
Right (1,0,0)  S0------------S3-----S1-P2-P1-P0--S2---P1-S0---  (0,4,5,20)
Right (0,0,0)  S0---------P2-P0--P3-P0-S3--P2-P3-P2--P3-S2-S1--P0----  (0,9,4,15)

λ> results (randomly (mkStdGen 0) 100) TotalStoreOrder
Right (1,0,1)  S0-----P1--P0-P1-S0-P2--C-S0---P2-P3-P2--S3-P0-P3-P0---S3-P0-P3-S0-  (1,13,6,11)
Right (0,0,1)  S0----P1-P0-----P2--P0--P2-P0-S2--S3-P1-P0---S1-S3----S0--  (0,8,6,16)
Right (0,0,0)  S0--------P2-P0--P3-P2-P0-P3-P2-C-S0-S3---S2--S1-C-S1-P0----  (2,10,6,14)

λ> results (randomly (mkStdGen 0) 100) PartialStoreOrder
Right (1,0,1)  S0-----P1--P0-P1-S0-P2--C-S0---P2-P3-P2--S3-P0-P3-P0---S3-P0-P3-S0-  (1,13,6,11)
Right (0,0,1)  S0----P1-P0-----P2--P0--P2-P0-S2--S3-P1-P0---S1-S3----S0--  (0,8,6,16)
Right (0,0,0)  S0--------P2-P0--P3-P2-P0-P3-P2-C-S0-S3---S2--S1-C-S1-P0----  (2,10,6,14)
```

Yikes!

The complexity metric I defined counts four things:

- The number of relaxed-memory commit actions
- The number of pre-emptive context switches
- The number of non-pre-emptive context switches
- The number of continues

I would much rather read a long trace where the only context switches are when threads block, than a short one which is rapidly jumping between threads. So, given two equivalent traces, I will always prefer the one with a lexicographically smaller complexity-tuple.
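Python tuples already compare lexicographically, so the preference can be sketched directly. Purely to illustrate the ordering, here are complexity tuples that appear in the systematic sequential-consistency run above (those traces produce different results, so they wouldn't actually be compared against each other during simplification):

```python
# Complexity tuples from the systematic SequentialConsistency run.
tuples = [(0, 0, 7, 24), (0, 0, 6, 24), (0, 1, 6, 23), (0, 0, 6, 24)]

# Tuples compare element-by-element from the left, so min prefers
# fewer commits, then fewer pre-emptions, then fewer starts.
assert min(tuples) == (0, 0, 6, 24)

# A trace with a pre-emption loses to one without, even though the
# pre-empting trace has fewer continues overall.
assert (0, 0, 6, 24) < (0, 1, 6, 23)
```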

The key idea underpinning trace simplification is that dejafu can tell when two scheduling decisions can be swapped without changing the behaviour of the program. I talked about this idea in the Using Hedgehog to Test Déjà Fu memo. So we can implement transformations which are guaranteed to preserve semantics *without needing to verify this by re-running the test case*.

Although we don't need to re-run the test case at all, the `183-shrinking` branch currently does, but only once at the end after the minimum has been found. This is because it's easier to generate a simpler sequence of scheduling decisions and use dejafu to produce the corresponding trace than it is to produce a simpler trace directly. This is still strictly better than a typical shrinking algorithm, which would re-run the test case after *each* shrinking step, rather than only at the end.

Rather than drag this out, here’s what those random traces simplify to:

```haskell
resultsS :: Way -> MemType -> IO ()
resultsS way memtype = do
  let settings =
        set lsimplify True .
        set lequality (Just (==)) $
        fromWayAndMemType way memtype
  res <- runSCTWithSettings settings storesAreTransitivelyVisible
  flip mapM_ res $ \(efa, trace) ->
    putStrLn (show efa ++ "\t" ++ showTrace trace ++ "\t" ++ show (complexity trace))
```

```
λ> resultsS (randomly (mkStdGen 0) 100) SequentialConsistency
Right (1,0,1)  S0----------P1---S2--P3-----S0---S2---S0---  (0,2,5,22)
Right (0,0,1)  S0----------P2-P1-P2-P1--S0---S2---S3-----S0---  (0,4,5,20)
Right (1,0,0)  S0------------S3-----S1---S0--S2----P0---  (0,1,5,23)
Right (0,0,0)  S0------------S3--P2-----S3---S1--P0----  (0,2,4,22)

λ> resultsS (randomly (mkStdGen 0) 100) TotalStoreOrder
Right (1,0,1)  S0----------P1---S2-----S0----S3-----S0--  (0,1,5,23)
Right (0,0,1)  S0----------P1-P2-----S0--S1--S0---S3-----S0--  (0,2,6,22)
Right (0,0,0)  S0----------P2--P3-----S0--S2---S1--P0----  (0,3,4,21)

λ> resultsS (randomly (mkStdGen 0) 100) PartialStoreOrder
Right (1,0,1)  S0----------P1---S2-----S0----S3-----S0--  (0,1,5,23)
Right (0,0,1)  S0----------P1-P2-----S0--S1--S0---S3-----S0--  (0,2,6,22)
Right (0,0,0)  S0----------P2--P3-----S0--S2---S1--P0----  (0,3,4,21)
```

This is much better.

There are two simplification phases: a preparation phase, which puts the trace into a normal form and prunes unnecessary commits; and an iteration phase, which repeats a step function until a fixed point is reached (or the iteration limit is).
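
The overall shape of that pipeline can be sketched generically. This is an illustrative stand-in, not dejafu's actual API; `simplifyWith` and its parameters are made-up names:

```haskell
-- Generic sketch of the two-phase structure: run a preparation pass once,
-- then iterate a step function until it reaches a fixed point or hits the
-- iteration limit, whichever comes first.
simplifyWith :: Eq a => (a -> a) -> (a -> a) -> Int -> a -> a
simplifyWith prepare step limit = go limit . prepare
  where
    go 0 x = x
    go n x =
      let x' = step x
      in if x' == x then x else go (n - 1) x'
```

The iteration limit guards against a step function which oscillates between equivalent forms rather than converging.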

The preparation phase has two steps: first we put the trace into *lexicographic normal form*, then we prune unnecessary commits.

We put a trace in lexicographic normal form by sorting by thread ID, where only independent actions can be swapped:

```haskell
lexicoNormalForm :: MemType -> [(ThreadId, ThreadAction)] -> [(ThreadId, ThreadAction)]
lexicoNormalForm memtype = go
  where
    go trc =
      let trc' = bubble initialDepState trc
      in if trc == trc' then trc else go trc'

    bubble ds (t1@(tid1, ta1):t2@(tid2, ta2):trc)
      | independent ds tid1 ta1 tid2 ta2 && tid2 < tid1 = bgo ds t2 (t1 : trc)
      | otherwise = bgo ds t1 (t2 : trc)
    bubble _ trc = trc

    bgo ds t@(tid, ta) trc = t : bubble (updateDepState memtype ds tid ta) trc
```
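
To see what the bubbling does in isolation, here is a stripped-down model where the dependency state is dropped and independence is just a fixed predicate over pairs of actions (`lexicoToy` is my name, not dejafu's):

```haskell
type Tid = Int

-- Toy lexicographic normal form: repeatedly bubble adjacent pairs, swapping
-- two actions whenever they are independent and out of thread-ID order.
-- The real lexicoNormalForm also threads a DepState through the walk.
lexicoToy :: Eq a => ((Tid, a) -> (Tid, a) -> Bool) -> [(Tid, a)] -> [(Tid, a)]
lexicoToy indep = go
  where
    go trc = let trc' = bubble trc in if trc == trc' then trc else go trc'

    bubble (t1@(tid1, _) : t2@(tid2, _) : trc)
      | indep t1 t2 && tid2 < tid1 = t2 : bubble (t1 : trc)
      | otherwise                  = t1 : bubble (t2 : trc)
    bubble trc = trc
```

If every pair of actions commutes, this is just a bubble sort by thread ID; if no pair does, the trace is left alone. The real trace sits between those extremes: only the independent pairs may move.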

If simplification only put traces into lexicographic normal form, we would get these results:

```
λ> resultsS (randomly (mkStdGen 0) 100) SequentialConsistency
Right (1,0,1)   S0-----------P1---S2--P0--S2--P0-P3----P0--            (0,5,3,19)
Right (0,0,1)   S0-----------P2-P1-P2-P1-P0--S2--P0-P1-S2-S3----P0--   (0,8,4,16)
Right (1,0,0)   S0------------S3----P1--P0--S1-S2----P0---             (0,3,4,21)
Right (0,0,0)   S0------------S2-P3--P2----S3--P1--P0----              (0,4,3,20)
λ> resultsS (randomly (mkStdGen 0) 100) TotalStoreOrder
Right (1,0,1)   S0-------P1---S2--C-S0-----P2--P0--S2-S3----P0--       (1,5,5,19)
Right (0,0,1)   S0-----------P1-P2--P0-S1-P0-P2--P0--S1-S2-S3----P0--  (0,7,5,17)
Right (0,0,0)   S0-----------P2---P3--C-S0-S2--S3--P1-C-S1-P0----      (2,6,5,18)
λ> resultsS (randomly (mkStdGen 0) 100) PartialStoreOrder
Right (1,0,1)   S0-------P1---S2--C-S0-----P2--P0--S2-S3----P0--       (1,5,5,19)
Right (0,0,1)   S0-----------P1-P2--P0-S1-P0-P2--P0--S1-S2-S3----P0--  (0,7,5,17)
Right (0,0,0)   S0-----------P2---P3--C-S0-S2--S3--P1-C-S1-P0----      (2,6,5,18)
```

These are better than they were, but we can do better still.

After putting the trace into lexicographic normal form, we delete any commit actions which are followed by any number of independent actions and then a memory barrier:

```haskell
dropCommits :: MemType -> [(ThreadId, ThreadAction)] -> [(ThreadId, ThreadAction)]
dropCommits SequentialConsistency = id
dropCommits memtype = go initialDepState
  where
    go ds (t1@(tid1, ta1@(CommitCRef _ _)):t2@(tid2, ta2):trc)
      | isBarrier (simplifyAction ta2) = go ds (t2:trc)
      | independent ds tid1 ta1 tid2 ta2 = t2 : go (updateDepState memtype ds tid2 ta2) (t1:trc)
    go ds (t@(tid,ta):trc) = t : go (updateDepState memtype ds tid ta) trc
    go _ [] = []
```

Such commits don’t affect the behaviour of the program at all, as all buffered writes get flushed when the memory barrier happens.
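
The idea (though not dejafu's implementation) can be sanity-checked with a toy model where a commit is deleted whenever only non-conflicting actions stand between it and a barrier; `ToyAct` and `dropCommitsToy` are made-up names:

```haskell
-- Toy model of commit pruning: a commit whose buffered write is about to be
-- flushed by a barrier anyway can be deleted.  In this sketch every
-- non-commit action is treated as independent of the commit; the real
-- dropCommits consults the dependency state instead.
data ToyAct = Commit | Barrier | Other Int
  deriving (Eq, Show)

dropCommitsToy :: [ToyAct] -> [ToyAct]
dropCommitsToy (Commit : trc)
  | barrierAhead trc = dropCommitsToy trc
  where
    barrierAhead (Barrier : _)  = True
    barrierAhead (Other _ : xs) = barrierAhead xs
    barrierAhead _              = False
dropCommitsToy (t : trc) = t : dropCommitsToy trc
dropCommitsToy []        = []
```

A commit with no barrier ahead of it survives: dropping it could change which writes other threads observe.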

If simplification only did the preparation phase, we would get these results:

```
λ> resultsS (randomly (mkStdGen 0) 100) SequentialConsistency
Right (1,0,1)   S0-----------P1---S2--P0--S2--P0-P3----P0--            (0,5,3,19)
Right (0,0,1)   S0-----------P2-P1-P2-P1-P0--S2--P0-P1-S2-S3----P0--   (0,8,4,16)
Right (1,0,0)   S0------------S3----P1--P0--S1-S2----P0---             (0,3,4,21)
Right (0,0,0)   S0------------S2-P3--P2----S3--P1--P0----              (0,4,3,20)
λ> resultsS (randomly (mkStdGen 0) 100) TotalStoreOrder
Right (1,0,1)   S0-------P1---S2--P0-----P2--P0--S2-S3----P0--         (0,5,4,19)
                ^-- better than just lexicoNormalForm
Right (0,0,1)   S0-----------P1-P2--P0-S1-P0-P2--P0--S1-S2-S3----P0--  (0,7,5,17)
Right (0,0,0)   S0-----------P2---P3--P0-S2--S3--P1--P0----            (0,5,3,19)
                ^-- better than just lexicoNormalForm
λ> resultsS (randomly (mkStdGen 0) 100) PartialStoreOrder
Right (1,0,1)   S0-------P1---S2--P0-----P2--P0--S2-S3----P0--         (0,5,4,19)
                ^-- better than just lexicoNormalForm
Right (0,0,1)   S0-----------P1-P2--P0-S1-P0-P2--P0--S1-S2-S3----P0--  (0,7,5,17)
Right (0,0,0)   S0-----------P2---P3--P0-S2--S3--P1--P0----            (0,5,3,19)
                ^-- better than just lexicoNormalForm
```

The iteration phase attempts to reduce context switching by pushing actions forwards, or pulling them backwards, through the trace.

If we have the trace `[(tid1, act1), (tid2, act2), (tid1, act3)]`, where `act2` and `act3` are independent, the “pull back” transformation would re-order that to `[(tid1, act1), (tid1, act3), (tid2, act2)]`.

In contrast, if `act1` and `act2` were independent, the “push forward” transformation would re-order that to `[(tid2, act2), (tid1, act1), (tid1, act3)]`. The two transformations are almost, but not quite, opposites.
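
Those two re-orderings can be checked against a stripped-down model with no dependency state, where independence is a fixed predicate over pairs of actions (the `*Toy` names are mine, not dejafu's):

```haskell
type Tid = Int

-- Toy "pull back": at a context switch, look ahead for the next action of
-- the departing thread and, if everything in between commutes with it, move
-- it back before the switch.
pullBackToy :: ((Tid, a) -> (Tid, a) -> Bool) -> [(Tid, a)] -> [(Tid, a)]
pullBackToy indep = go
  where
    go (t1@(tid1, _) : trc@((tid2, _) : _))
      | tid1 /= tid2 = t1 : go (maybe trc (uncurry (:)) (find tid1 trc))
      | otherwise    = t1 : go trc
    go trc = trc

    find tid0 (t@(tid, _) : trc)
      | tid == tid0 = Just (t, trc)
      | otherwise   = case find tid0 trc of
          Just (ft, trc') | indep t ft -> Just (ft, t : trc')
          _                            -> Nothing
    find _ [] = Nothing

-- Toy "push forward": at a context switch, move the departing thread's last
-- action forward to its next scheduling point, if everything in between
-- commutes with it.
pushForwardToy :: ((Tid, a) -> (Tid, a) -> Bool) -> [(Tid, a)] -> [(Tid, a)]
pushForwardToy indep = go
  where
    go (t1@(tid1, _) : trc@((tid2, _) : _))
      | tid1 /= tid2 = maybe (t1 : go trc) go (find t1 trc)
      | otherwise    = t1 : go trc
    go trc = trc

    find t0@(tid0, _) (t@(tid, _) : trc)
      | tid == tid0 = Just (t0 : t : trc)
      | indep t0 t  = (t :) <$> find t0 trc
      | otherwise   = Nothing
    find _ [] = Nothing
```

With every pair of actions independent, `pullBackToy` turns `[(1,"act1"),(2,"act2"),(1,"act3")]` into `[(1,"act1"),(1,"act3"),(2,"act2")]`, and `pushForwardToy` turns it into `[(2,"act2"),(1,"act1"),(1,"act3")]`, matching the two examples above.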

Pull-back walks through the trace and, at every context switch, looks forward to see if there is a single action of the original thread it can put before the context switch:

```haskell
pullBack :: MemType -> [(ThreadId, ThreadAction)] -> [(ThreadId, ThreadAction)]
pullBack memtype = go initialDepState
  where
    go ds (t1@(tid1, ta1):trc@((tid2, _):_)) =
      let ds' = updateDepState memtype ds tid1 ta1
          trc' = if tid1 /= tid2
                 then maybe trc (uncurry (:)) (findAction tid1 ds' trc)
                 else trc
      in t1 : go ds' trc'
    go _ trc = trc

    findAction tid0 = fgo
      where
        fgo ds (t@(tid, ta):trc)
          | tid == tid0 = Just (t, trc)
          | otherwise = case fgo (updateDepState memtype ds tid ta) trc of
              Just (ft@(ftid, fa), trc')
                | independent ds tid ta ftid fa -> Just (ft, t:trc')
              _ -> Nothing
        fgo _ _ = Nothing
```

Push-forward walks through the trace and, at every context switch, looks forward to see if the last action of the original thread can be put at its next execution:

```haskell
pushForward :: MemType -> [(ThreadId, ThreadAction)] -> [(ThreadId, ThreadAction)]
pushForward memtype = go initialDepState
  where
    go ds (t1@(tid1, ta1):trc@((tid2, _):_)) =
      let ds' = updateDepState memtype ds tid1 ta1
      in if tid1 /= tid2
         then maybe (t1 : go ds' trc) (go ds) (findAction tid1 ta1 ds trc)
         else t1 : go ds' trc
    go _ trc = trc

    findAction tid0 ta0 = fgo
      where
        fgo ds (t@(tid, ta):trc)
          | tid == tid0 = Just ((tid0, ta0) : t : trc)
          | independent ds tid0 ta0 tid ta = (t:) <$> fgo (updateDepState memtype ds tid ta) trc
          | otherwise = Nothing
        fgo _ _ = Nothing
```

The iteration process just repeats `pushForward memtype . pullBack memtype`.

If it only used `pullBack`, we would get these results:

```
λ> resultsS (randomly (mkStdGen 0) 100) SequentialConsistency
Right (1,0,1)   S0-----------P1---S2---P0--S2--S0-P3-----S0--      (0,3,5,21)
Right (0,0,1)   S0-----------P2-P1-P2--P1--S0--S2--S0-P3-----S0--  (0,5,5,19)
Right (1,0,0)   S0------------S3-----S1---S0--S2----P0---          (0,1,5,23)
Right (0,0,0)   S0------------S2-P3---P2----S3--S1--P0----         (0,3,4,21)
λ> resultsS (randomly (mkStdGen 0) 100) TotalStoreOrder
Right (1,0,1)   S0-----------P1---S2-----S0---S3-----S0--          (0,1,5,23)
Right (0,0,1)   S0-----------P1-P2-----S0-S1--S0---S3-----S0--     (0,2,6,22)
Right (0,0,0)   S0-----------P2---P3-----S0-S2--S1--P0----         (0,3,4,21)
λ> resultsS (randomly (mkStdGen 0) 100) PartialStoreOrder
Right (1,0,1)   S0-----------P1---S2-----S0---S3-----S0--          (0,1,5,23)
Right (0,0,1)   S0-----------P1-P2-----S0-S1--S0---S3-----S0--     (0,2,6,22)
Right (0,0,0)   S0-----------P2---P3-----S0-S2--S1--P0----         (0,3,4,21)
```

Without exception, iterating `pullBack` is an improvement over just doing the preparation phase.

If it only used `pushForward`, we would get these results:

```
λ> resultsS (randomly (mkStdGen 0) 100) SequentialConsistency
Right (1,0,1)   S0-------P1---S2--P0------S2--P3----P0---            (0,4,3,20)
Right (0,0,1)   S0-------P2-P1-P2-P1-P0------S1-S2---S3----P0---     (0,6,4,18)
Right (1,0,0)   S0------------S3----P1--P0--S1-S2----P0---           (0,3,4,21)
                ^-- no improvement over preparation
Right (0,0,0)   S0------------S3--P2-----S3--P1--P0----              (0,3,3,21)
λ> resultsS (randomly (mkStdGen 0) 100) TotalStoreOrder
Right (1,0,1)   S0----P1---S0---P2----P0-------S2-S3----P0--         (0,4,4,20)
Right (0,0,1)   S0-------P1-P2--P0-----S1-P2--P0---S1-S2-S3----P0--  (0,6,5,18)
Right (0,0,0)   S0----------P2--P3--P0--S2---S3--P1--P0----          (0,5,3,19)
                ^-- no improvement over preparation
λ> resultsS (randomly (mkStdGen 0) 100) PartialStoreOrder
Right (1,0,1)   S0----P1---S0---P2----P0-------S2-S3----P0--         (0,4,4,20)
Right (0,0,1)   S0-------P1-P2--P0-----S1-P2--P0---S1-S2-S3----P0--  (0,6,5,18)
Right (0,0,0)   S0----------P2--P3--P0--S2---S3--P1--P0----          (0,5,3,19)
                ^-- no improvement over preparation
```

With three exceptions, where the traces didn’t change, iterating `pushForward` is an improvement over just doing preparation.

We’ve already seen the results if we combine them:

```
λ> resultsS (randomly (mkStdGen 0) 100) SequentialConsistency
Right (1,0,1)   S0----------P1---S2--P3-----S0---S2---S0---       (0,2,5,22)
Right (0,0,1)   S0----------P2-P1-P2-P1--S0---S2---S3-----S0---   (0,4,5,20)
Right (1,0,0)   S0------------S3-----S1---S0--S2----P0---         (0,1,5,23)
                ^-- same as pullBack, which is better than pushForward
Right (0,0,0)   S0------------S3--P2-----S3---S1--P0----          (0,2,4,22)
λ> resultsS (randomly (mkStdGen 0) 100) TotalStoreOrder
Right (1,0,1)   S0----------P1---S2-----S0----S3-----S0--         (0,1,5,23)
                ^-- same as pullBack, which is better than pushForward
Right (0,0,1)   S0----------P1-P2-----S0--S1--S0---S3-----S0--    (0,2,6,22)
                ^-- same as pullBack, which is better than pushForward
Right (0,0,0)   S0----------P2--P3-----S0--S2---S1--P0----        (0,3,4,21)
λ> resultsS (randomly (mkStdGen 0) 100) PartialStoreOrder
Right (1,0,1)   S0----------P1---S2-----S0----S3-----S0--         (0,1,5,23)
                ^-- same as pullBack, which is better than pushForward
Right (0,0,1)   S0----------P1-P2-----S0--S1--S0---S3-----S0--    (0,2,6,22)
                ^-- same as pullBack, which is better than pushForward
Right (0,0,0)   S0----------P2--P3-----S0--S2---S1--P0----        (0,3,4,21)
```

I think what I have right now is pretty good. It’s definitely a vast improvement over not doing any simplification.

*But*, no random traces get simplified to the corresponding systematic traces, which is a little disappointing. I think that’s because the current passes just try to reduce context switches of any form, whereas really I want to reduce pre-emptive context switches more than non-pre-emptive ones.