Probabilistic programming with continuations

In this post I’ll explain something folkloric: that you can pretend that the continuation monad is a probability monad, and do probabilistic programming in it. Unlike more obvious representations of probability, like the list-based one in Numeric.Probability.Distribution, this approach works equally well for continuous distributions as for discrete ones (as long as you don’t mind numerical integration). The post is a literate Haskell program, an expanded version of this repository. It’s a sort of sequel to my very first blog post, Abusing the continuation monad.
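
To give the flavour before the full development: the representation in question encodes a measure on a type a as its integration functional, i.e. a value of type Cont Double a. Here is a minimal sketch of that idea (the names Measure, expect, coin, uniform, probability and demo are my own illustrative choices, not necessarily the ones used in the post or the repository):

import Control.Monad.Cont

-- A measure on a, represented by its integration functional:
-- give it a function to integrate and it returns the integral.
type Measure a = Cont Double a

-- Expectation of a function under a measure.
expect :: Measure a -> (a -> Double) -> Double
expect = runCont

-- A fair coin: integrate by averaging the two possible values.
coin :: Measure Bool
coin = cont $ \f -> 0.5 * f True + 0.5 * f False

-- The uniform distribution on [0,1], via a crude midpoint Riemann sum.
uniform :: Measure Double
uniform = cont $ \f ->
  let n  = 1000 :: Int
      xs = [(fromIntegral i + 0.5) / fromIntegral n | i <- [0 .. n - 1]]
  in  sum (map f xs) / fromIntegral n

-- The probability of an event is the expectation of its indicator.
probability :: Measure a -> (a -> Bool) -> Double
probability m p = expect m (\x -> if p x then 1 else 0)

-- Example: P(x + y > 1) for two independent uniforms, roughly 0.5.
demo :: Double
demo = probability (do { x <- uniform; y <- uniform; return (x + y) }) (> 1)

The point the post develops is that the ordinary Monad instance for Cont already gives the right notion of composing distributions, which is what makes the do-block in demo meaningful.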

I mentally call this idea “synthetic measure theory” or sometimes “synthetic probability”, although as far as I know it is not related to the various Google hits for those terms, such as this or this. (But one of the hits is this paper, which is probably related.)

Continue reading “Probabilistic programming with continuations”

Abusing the continuation monad

(Recall that monads are not a good topic for your first blog post.)

I intend to bootstrap a blog by writing about 2 of my old papers, Monad Transformers for Backtracking Search and The Selection Monad as a CPS Transformation. (Wall Street will be spared for the time being.)

I’m going to write about this little bit of Haskell code:

import Control.Monad.Cont
-- sat n p: does p return True for some assignment of n Bools?
sat :: Int -> ([Bool] -> Bool) -> Bool
sat n = runCont $ sequence $ replicate n $ cont $ \k -> k $ k True

It’s a SAT solver: you give it the number of variables to search over and a boolean function on that many variables (passed as a list of Bools), and it decides whether the function returns True for any assignment of those variables.
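
As a quick illustration (my own examples, not from the original post, and the names ex1 and ex2 are just for exposition):

-- Satisfiable: e.g. [True, False, False] works, so this is True.
ex1 :: Bool
ex1 = sat 3 (\bs -> or bs && not (and bs))

-- Unsatisfiable: no pair of Bools is both all-True and all-False, so this is False.
ex2 :: Bool
ex2 = sat 2 (\bs -> and bs && not (or bs))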

How does this work? Haven’t the foggiest. If anyone can explain what it does at runtime, there’s probably a research paper in it for you. If you can also predict how long it takes, that’s a big deal.

Continue reading “Abusing the continuation monad”