Reflections 2024

Day 1


Day 1 is always a Haskell warmup :)

One nice way to get both lists is to parse into [(Int, Int)] and use unzip :: [(a, b)] -> ([a], [b]), turning a list of pairs into a pair of lists.
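Here's a minimal sketch of that parsing step, assuming a simple line-based input format (parseInput is a hypothetical name, not necessarily the actual parser):

parseInput :: String -> ([Int], [Int])
parseInput = unzip . map toPair . lines
  where
    -- each line holds two whitespace-separated numbers
    toPair ln = case map read (words ln) of
      [x, y] -> (x, y)
      _ -> error "malformed line"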

Once we have our two [Int]s, part 1 pairs the smallest of one list with the smallest of the other, so we sort both before zipping:

import Data.List (sort)

part1 :: [Int] -> [Int] -> Int
part1 xs ys = sum $ map abs (zipWith subtract (sort xs) (sort ys))

For part 2, we can build a frequency map and then map a lookup over it:

import qualified Data.Map as M

part2 :: [Int] -> [Int] -> Int
part2 xs ys = sum $ map (\x -> x * M.findWithDefault 0 x freqMap) xs
  where
    freqMap :: M.Map Int Int
    freqMap = M.fromListWith (+) (map (,1) ys)

Day 1 Benchmarks

>> Day 01a
benchmarking...
time                 393.8 μs   (392.4 μs .. 394.9 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 393.0 μs   (392.4 μs .. 393.5 μs)
std dev              1.986 μs   (1.684 μs .. 2.403 μs)

* parsing and formatting times excluded

>> Day 01b
benchmarking...
time                 181.5 μs   (181.0 μs .. 182.3 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 182.2 μs   (181.9 μs .. 182.7 μs)
std dev              1.178 μs   (755.9 ns .. 1.950 μs)

* parsing and formatting times excluded

Day 2


Again a straightforward Haskell day. I have a utility function I use for a bunch of these:

countTrue :: (a -> Bool) -> [a] -> Int
countTrue p = length . filter p

So we can run countTrue over our list of [Int]. The predicate is:

import Data.Ix (inRange)

predicate :: [Int] -> Bool
predicate xs =
  all (inRange (1, 3)) diffies
    || all (inRange (1, 3) . negate) diffies
  where
    diffies = zipWith subtract xs (drop 1 xs)

It's a straightforward application of countTrue predicate for part 1. For part 2, we can see if any of the possibilities match the predicate.

part1 :: [[Int]] -> Int
part1 = countTrue predicate

part2 :: [[Int]] -> Int
part2 = countTrue \xs ->
  let possibilities = xs : zipWith (++) (inits xs) (tail (tails xs))
   in any predicate possibilities

inits [1,2,3] gives us [], [1], [1,2], and [1,2,3], and tail (tails xs) gives us [2,3], [3], and []. So we can zip those up to get [2,3], [1,3], and [1,2]. We just need to make sure we add back in our original xs.

This is probably the simplest way to write it, but there's something cute/recursive we can do using the list "monad" to generate all possibilities: for each x:xs, we can either "drop here" or "drop later":

tryDrops :: [a] -> [[a]]
tryDrops = \case
  [] -> [[]]
  x : xs -> xs : ((x :) <$> tryDrops xs)
        --  ^ drop here
        --        ^ drop later
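A quick sanity check in GHCi; note that the unmodified list comes back for free, thanks to the [[]] base case:

-- ghci> tryDrops [1,2,3]
-- [[2,3],[1,3],[1,2],[1,2,3]]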

And this simplifies part 2 significantly:

part2 :: [[Int]] -> Int
part2 = countTrue $ any predicate . tryDrops

Day 2 Benchmarks

>> Day 02a
benchmarking...
time                 49.05 μs   (48.35 μs .. 49.79 μs)
                     0.993 R²   (0.981 R² .. 0.999 R²)
mean                 49.39 μs   (48.18 μs .. 52.99 μs)
std dev              5.746 μs   (1.093 μs .. 10.17 μs)
variance introduced by outliers: 87% (severely inflated)

* parsing and formatting times excluded

>> Day 02b
benchmarking...
time                 425.5 μs   (424.0 μs .. 426.9 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 424.3 μs   (423.4 μs .. 426.0 μs)
std dev              3.680 μs   (2.733 μs .. 6.026 μs)

* parsing and formatting times excluded

Day 3


You can think of the whole thing as essentially a state machine / finite automaton. For part 1 it's straightforward: chomp as many mul(x,y) as possible, summing the muls:

import qualified Control.Monad.Combinators as P
import qualified Text.Megaparsec as P
import qualified Text.Megaparsec.Char as P
import qualified Text.Megaparsec.Char.Lexer as PL

parseMul :: P.Parsec v String Int
parseMul = product <$> P.between "mul(" ")" (PL.decimal `P.sepBy` ",")

part1 :: P.Parsec v String Int
part1 = sum <$> many (dropUntil parseMul)

-- | A utility parser combinator I have that skips until the first match
dropUntil :: P.Parsec e s end -> P.Parsec e s end
dropUntil x = P.try (P.skipManyTill P.anySingle (P.try x))

For part 2 the state machine has an "on or off" state: in the "off" state, search for the next do() and switch on. In the "on" state, search for the next mul and continue on, or the next don't() and switch off.

part2 :: P.Parsec v String Int
part2 = sum <$> goEnabled
  where
    goDisabled = P.option [] . dropUntil $ "do()" *> goEnabled
    goEnabled = P.option [] . dropUntil $
      P.choice
        [ "don't()" *> goDisabled
        , (:) <$> parseMul <*> goEnabled
        ]

Day 3 Benchmarks

>> Day 03a
benchmarking...
time                 1.173 ms   (1.164 ms .. 1.181 ms)
                     0.999 R²   (0.999 R² .. 1.000 R²)
mean                 1.179 ms   (1.170 ms .. 1.186 ms)
std dev              29.67 μs   (22.62 μs .. 37.85 μs)
variance introduced by outliers: 14% (moderately inflated)

* parsing and formatting times excluded

>> Day 03b
benchmarking...
time                 1.827 ms   (1.809 ms .. 1.860 ms)
                     0.999 R²   (0.998 R² .. 1.000 R²)
mean                 1.792 ms   (1.786 ms .. 1.809 ms)
std dev              28.94 μs   (18.52 μs .. 51.00 μs)

* parsing and formatting times excluded

Day 4


Here we are matching "stencils" across different windows, so it's always fun to use comonads for this. That's because extend :: (w a -> b) -> w a -> w b lets you automagically convert a function on windows (the w a -> b) to a w a -> w b, the application across every window.

First we parse our input into a Map (V2 Int) Char, where data V2 a = V2 a a (from the linear library) is a tuple-like type with the correct Num instance that I use for most of these.
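A minimal sketch of that parsing step (parseAsciiMap is a hypothetical helper name):

import qualified Data.Map as M
import Linear.V2 (V2 (..))

parseAsciiMap :: String -> M.Map (V2 Int) Char
parseAsciiMap s = M.fromList
    [ (V2 x y, c)
    | (y, row) <- zip [0 ..] (lines s)
    , (x, c) <- zip [0 ..] row
    ]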

Our stencils are (centered around 0,0):

import Linear.V2 (V2(..), (*^))

xmas :: [Map (V2 Int) Char]
xmas =
    [ M.fromList [(i *^ step, x) | (i, x) <- zip [0 ..] "XMAS"]
    | d <- [V2 1 0, V2 0 1, V2 1 1, V2 (-1) 1]
    , step <- [d, negate d]
    ]

crossMas :: [Map (V2 Int) Char]
crossMas =
    [ M.insert 0 'A' (diag1 <> diag2)
    | diag1 <- M.fromList . zip [V2 (-1) (-1), V2 1 1] <$> ["MS", "SM"]
    , diag2 <- M.fromList . zip [V2 1 (-1), V2 (-1) 1] <$> ["MS", "SM"]
    ]

Now some utility functions to wrap and unwrap our Map (V2 Int) Char into a Store (V2 Int) (Maybe Char) store comonad, so we can use its Comonad instance:

mapToStore :: (Ord k, Num k) => Map k a -> Store k (Maybe a)
mapToStore mp = store (`M.lookup` mp) 0

mapFromStore :: Num k => Set k -> Store k a -> Map k a
mapFromStore ks = experiment \x -> M.fromSet (+ x) ks

Now a function to check if a stencil matches a neighborhood:

checkStencil :: Num k => Map k a -> Store k (Maybe a) -> Bool
checkStencil mp x = all (\(p, expected) -> peeks (+ p) x == Just expected) (M.toList mp)

countWindowMatches :: (Num k, Eq a) => [Map k a] -> Store k (Maybe a) -> Int
countWindowMatches mps x = length $ filter (`checkStencil` x) mps

Now we have a Store k (Maybe a) -> Int, which takes a window and gives an Int that is the number of stencil matches at the window origin. The magic of the comonad is that now we have extend (countWindowMatches stencils) :: Store k (Maybe a) -> Store k Int, which runs that windowed function across the entire map.

countMatches :: [Map (V2 Int) Char] -> Map (V2 Int) Char -> Int
countMatches stencils xs =
    sum . mapFromStore (M.keysSet xs) . extend (countWindowMatches stencils) . mapToStore $ xs

part1 :: Map (V2 Int) Char -> Int
part1 = countMatches xmas

part2 :: Map (V2 Int) Char -> Int
part2 = countMatches crossMas

Day 4 Benchmarks

>> Day 04a
benchmarking...
time                 37.83 ms   (37.05 ms .. 38.53 ms)
                     0.998 R²   (0.991 R² .. 1.000 R²)
mean                 38.29 ms   (37.96 ms .. 39.21 ms)
std dev              1.043 ms   (345.7 μs .. 1.881 ms)

* parsing and formatting times excluded

>> Day 04b
benchmarking...
time                 22.07 ms   (21.94 ms .. 22.20 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 22.06 ms   (21.99 ms .. 22.13 ms)
std dev              156.2 μs   (117.2 μs .. 204.2 μs)

* parsing and formatting times excluded

Day 5


This one lends itself pretty nicely to basically topologically sorting each page list according to the graph of "X precedes Y" edges.

If we have a list of (Int, Int) rules, we can build a graph where the nodes are the page numbers and the edges are "X precedes Y".

Then for each page list, we can filter that graph for only the nodes in that page list, and then toposort it:

import qualified Data.Graph.Inductive as G

sortByRules :: [(Int, Int)] -> [Int] -> [Int]
sortByRules rules = \xs ->
    G.topsort . G.nfilter (`S.member` S.fromList xs) $ ruleGraph
  where
    ruleGraph :: G.Gr () ()
    ruleGraph =
      G.mkUGraph
        (nubOrd $ foldMap (\(x,y) -> [x,y]) rules)
        rules

part1 :: [(Int, Int)] -> [[Int]] -> Int
part1 rules pages = sum
    [ middleVal orig
    | orig <- pages
    , orig == sorter orig
    ]
  where
    sorter = sortByRules rules

part2 :: [(Int, Int)] -> [[Int]] -> Int
part2 rules pages = sum
    [ middleVal sorted
    | orig <- pages
    , let sorted = sorter orig
    , orig /= sorted
    ]
  where
    sorter = sortByRules rules

We write sortByRules with a lambda closure (naming the partially applied function sorter) to ensure that the graph is generated only once, with the closure then re-applied for every page list.

One cute way to find the middle value is to traverse the list with two pointers "in parallel", one moving twice as quickly as the other:

middleVal :: [a] -> a
middleVal xs0 = go xs0 xs0
  where
    go (_:xs) (_:_:ys) = go xs ys
    go (x:_) _ = x
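A quick check in GHCi:

-- ghci> middleVal [1,2,3,4,5]
-- 3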

Day 5 Benchmarks

>> Day 05a
benchmarking...
time                 18.31 ms   (18.13 ms .. 18.47 ms)
                     0.999 R²   (0.999 R² .. 1.000 R²)
mean                 18.42 ms   (18.27 ms .. 18.57 ms)
std dev              359.7 μs   (219.9 μs .. 538.4 μs)

* parsing and formatting times excluded

>> Day 05b
benchmarking...
time                 17.68 ms   (17.64 ms .. 17.72 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 17.69 ms   (17.65 ms .. 17.72 ms)
std dev              93.29 μs   (64.40 μs .. 139.0 μs)

* parsing and formatting times excluded

Day 6


This one features a common staple of Advent of Code: the 2D grid. In this case we can parse it as a Set Point of boulders and an initial starting Point, with type Point = V2 Int from the linear library, which has good Num, Functor, Foldable instances etc.

Then the (possibly infinite) stepping function becomes:

import Data.Finite
import Linear.V2
import qualified Data.Set as S
import qualified Data.Vector.Sized as SV

type Point = V2 Int

stepInDir :: Finite 4 -> Point
stepInDir = SV.index $ SV.fromTuple (V2 0 (-1), V2 1 0, V2 0 1, V2 (-1) 0)

stepPath :: Int -> S.Set Point -> Point -> [(Point, Finite 4)]
stepPath maxCoord boulders = takeWhile inBounds . iterate go . (,0)
  where
    go (x, d)
      | x' `S.member` boulders = (x, d + 1)
      | otherwise = (x', d)
      where
        x' = x + stepInDir d
    inBounds = all (inRange (0, maxCoord)) . fst

part1 :: Set Point -> Point -> Int
part1 boulders = S.size . S.fromList . map fst . stepPath maxCoord boulders
  where
    maxCoord = maximum (foldMap toList boulders)

Here I use Finite 4 to give a cyclic type I can repeatedly rotate, looking up a single step in that direction from a size-4 vector. In my actual code I use a data type data Dir = North | East | South | West that is essentially the same thing.

For part 2 we can just try to insert new boulders along the original route and count the boulders that give loops. We can use tortoise and hare to do loop detection.

hasLoop :: Eq a => [a] -> Bool
hasLoop xs0 = go xs0 (drop 1 xs0)
  where
    go (x:xs) (y:_:ys) = x == y || go xs ys
    go _ _ = False

part2 :: Set Point -> Point -> Int
part2 boulders p0 = length . filter goodBoulder . nubOrd . map fst $ stepPath maxCoord boulders p0
  where
    maxCoord = maximum (foldMap toList boulders)
    goodBoulder p = p /= p0 && hasLoop (stepPath maxCoord (S.insert p boulders) p0)

Overall runs in about 1 second on my machine. You could optimize it a bit by jumping directly to the next boulder. Basically you'd keep a map of x to the y's of all boulders in that column so you can move vertically, and then a map of y to the x's of all boulders in that row so you can move horizontally.

collapseAxes :: Foldable f => f Point -> V2 (Map Int (Set Int))
collapseAxes = foldl' (flip addAxesMap) mempty

addAxesMap :: Point -> V2 (Map Int (Set Int)) -> V2 (Map Int (Set Int))
addAxesMap (V2 x y) (V2 xMaps yMaps) =
  V2
    (M.insertWith (<>) x (S.singleton y) xMaps)
    (M.insertWith (<>) y (S.singleton x) yMaps)

slideAxes :: V2 (Map Int (Set Int)) -> Point -> Finite 4 -> Maybe Point
slideAxes (V2 xMap yMap) (V2 x y) = SV.index $ SV.fromTuple
  ( S.lookupLT y (M.findWithDefault mempty x xMap) <&> \y' -> V2 x (y' + 1)
  , S.lookupGT x (M.findWithDefault mempty y yMap) <&> \x' -> V2 (x' - 1) y
  , S.lookupGT y (M.findWithDefault mempty x xMap) <&> \y' -> V2 x (y' - 1)
  , S.lookupLT x (M.findWithDefault mempty y yMap) <&> \x' -> V2 (x' + 1) y
  )

stepPath' :: V2 (Map Int (Set Int)) -> Point -> [(Point, Finite 4)]
stepPath' as = unfoldr go . (,0)
  where
    go (p, d) = do
      p' <- slideAxes as p d
      pure ((p', d + 1), (p', d + 1))

part2' :: Set Point -> Point -> Int
part2' boulders p0 = length . filter goodBoulder . nubOrd . map fst $ stepPath maxCoord boulders p0
  where
    maxCoord = maximum (foldMap toList boulders)
    axesMap0 = collapseAxes boulders
    goodBoulder p = p /= p0 && hasLoop (stepPath' (addAxesMap p axesMap0) p0)

This cuts the time by about 30x.

Day 6 Benchmarks

>> Day 06a
benchmarking...
time                 1.452 ms   (1.432 ms .. 1.470 ms)
                     0.999 R²   (0.998 R² .. 0.999 R²)
mean                 1.448 ms   (1.440 ms .. 1.462 ms)
std dev              37.35 μs   (24.50 μs .. 53.82 μs)
variance introduced by outliers: 14% (moderately inflated)

* parsing and formatting times excluded

>> Day 06b
benchmarking...
time                 36.06 ms   (35.95 ms .. 36.17 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 35.89 ms   (35.69 ms .. 35.99 ms)
std dev              290.5 μs   (156.0 μs .. 433.8 μs)

* parsing and formatting times excluded

Day 7


This one works out well as a list monad based search. Essentially you are picking operations where:

targ == (x ? y) ? z

and if those ? operations induce a list monad split, you can then search all of the possible choices:

import Control.Monad (foldM)

-- foldl1 in a monad
foldl1M :: Monad m => (a -> a -> m a) -> [a] -> m a
foldl1M f (x : xs) = foldM f x xs

checkEquation :: [Int -> Int -> Int] -> Int -> [Int] -> Bool
checkEquation ops targ xs = targ `elem` foldl1M branchOnOp xs
  where
    branchOnOp a b = map (\f -> f a b) ops

Then you can do checkEquation [(+),(*)] for part 1 and checkEquation [(+),(*),cat] for part 2.

However, it is kind of helpful to work backwards from the target to see if you can get the initial number. For example, in 292: 11 6 16 20, you can eliminate * as an option for the final operation right off the bat.

So really, you can rephrase the problem as:

x == y ? (z ? targ)

where ? are the inverse operations, but you have some way to easily eliminate operations that don't make sense.

checkBackEquation :: [Int -> Int -> Maybe Int] -> Int -> [Int] -> Bool
checkBackEquation unOps targ (x : xs) = x `elem` foldrM branchOnUnOp targ xs
  where
    branchOnUnOp a b = mapMaybe (\f -> f a b) unOps

And our un-ops are (written with the MonadComprehensions extension, so each comprehension here is a Maybe):

unAdd :: Int -> Int -> Maybe Int
unAdd x y = [y - x | y >= x]

unMul :: Int -> Int -> Maybe Int
unMul x y = [y `div` x | y `mod` x == 0]

unCat :: Int -> Int -> Maybe Int
unCat x y = [d | m == x]
  where
    pow = length . takeWhile (<= x) $ iterate (* 10) 1
    (d, m) = y `divMod` (10 ^ pow)
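A quick check in GHCi:

-- ghci> unCat 16 11616
-- Just 116
-- ghci> unCat 16 11617
-- Nothing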

So part 1 is checkBackEquation [unAdd, unMul] and part 2 is checkBackEquation [unAdd, unMul, unCat].

Timing-wise, moving from forwards to backwards brought my times for part 2 from 380ms to 1.2ms.

Day 7 Benchmarks

>> Day 07a
benchmarking...
time                 685.2 μs   (680.2 μs .. 692.3 μs)
                     0.989 R²   (0.975 R² .. 0.999 R²)
mean                 723.5 μs   (701.6 μs .. 756.7 μs)
std dev              94.38 μs   (57.31 μs .. 128.7 μs)
variance introduced by outliers: 84% (severely inflated)

* parsing and formatting times excluded

>> Day 07b
benchmarking...
time                 1.260 ms   (1.258 ms .. 1.262 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 1.259 ms   (1.258 ms .. 1.260 ms)
std dev              3.710 μs   (2.848 μs .. 4.910 μs)

* parsing and formatting times excluded

Day 8


Mostly straightforward Haskell, building up the set of all antinodes by iterating over every pair of antennae. The main thing we parameterize over is the way of generating the antinode points from a given pair of locations.

makeAntinodes :: Eq a => Map Point a -> (Point -> Point -> [Point]) -> Set Point
makeAntinodes mp genPts = S.fromList do
  (p1, c1) <- M.toList mp
  (p2, c2) <- M.toList mp
  guard $ p1 /= p2 && c1 == c2
  genPts p1 p2

day08 :: (Point -> Point -> [Point]) -> Map Point Char -> Int
day08 stepper mp = S.size $
    makeAntinodes ants \p1 p2 ->
      takeWhile (`S.member` allPoints) $ stepper p1 p2
  where
    allPoints = M.keysSet mp
    ants = M.filter (/= '.') mp

day08a :: Map Point Char -> Int
day08a = day08 \p1 p2 -> [p2 + p2 - p1]

day08b :: Map Point Char -> Int
day08b = day08 \p1 p2 -> iterate (+ (p2 - p1)) p2

Day 8 Benchmarks

>> Day 08a
benchmarking...
time                 590.0 μs   (587.8 μs .. 592.0 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 585.1 μs   (583.2 μs .. 586.6 μs)
std dev              5.404 μs   (4.505 μs .. 6.494 μs)

* parsing and formatting times excluded

>> Day 08b
benchmarking...
time                 990.5 μs   (987.5 μs .. 996.0 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 994.3 μs   (991.8 μs .. 997.7 μs)
std dev              9.220 μs   (8.039 μs .. 11.41 μs)

* parsing and formatting times excluded

Day 9


Both of these today involve consuming queues, but the nature of the queues is different. For part 1, we consume two queues: the queue of gaps from left to right, and the queue of files from right to left. For part 2, we consume the queue of file blocks from right to left.

We can actually consume the queues in both cases directly into their checksum without going through an intermediate structure, which is kind of convenient too.

First, let's parse the list of numbers into a usable state: for gaps, an IntMap of positions to gap sizes, and for file blocks, an IntMap of positions to file ids and lengths.

toDiskState :: [a] -> [Int] -> (IntMap Int, IntMap (a, Int))
toDiskState fids =
      IM.mapEither splitter
    . IM.fromList
    . snd
    . mapAccumL go 0
    . zip (intersperse Nothing (Just <$> fids))
  where
    go i (mfid, len) = (i + len, (i, (mfid, len)))
    splitter (mfid, len) = case mfid of
      Nothing -> Left len
      Just fid -> Right (fid, len)
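For example, on the small 12345 disk map from the prompt:

-- ghci> toDiskState [0 ..] [1, 2, 3, 4, 5]
-- (fromList [(1,2),(6,4)], fromList [(0,(0,1)),(3,(1,3)),(10,(2,5))])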

For part 1, the behavior of the queues is non-trivial so it's helpful to write it using explicit recursion. The first queue is the queue of gaps (which we push-back on with a smaller gap length) and the second queue is the queue of reversed single (file slot index, file id) that we pop one-by-one. We also short-circuit to the end if our forward gap indices move past our backwards file indices.

fillGaps
  :: [(Int, Int)]   -- ^ list of (gap starting Index, gap length) left-to-right
  -> [(Int, Int)]   -- ^ list of (single file slot index, file id) right-to-left
  -> Int
fillGaps [] ends = sum $ map (uncurry (*)) ends
fillGaps _ [] = 0
fillGaps ((gapI, gapLen):gaps) ((endI, fid):ends)
  | endI > gapI = gapI * fid + fillGaps (addBack gaps) ends
  | otherwise = endI * fid + sum (map (uncurry (*)) ends)
  where
    addBack
      | gapLen == 1 = id
      | otherwise = ((gapI + 1, gapLen - 1) :)

part1 :: IntMap Int -> IntMap (Int, Int) -> Int
part1 gaps files =
  fillGaps
    (IM.toList gaps)
    [ (i, fid)
    | (i0, (fid, len)) <- IM.toDescList files
    , i <- take len $ iterate (subtract 1) (i0 + len - 1)
    ]

For part 2, our queue consumption is pretty typical, with no re-push or short-circuiting. We just move through every single file in reverse once, so it can be captured as a mapAccumL: a stateful map over the backwards file blocks, where state is the empty slot candidates.

moveBlock :: IntMap Int -> (Int, (Int, Int)) -> (IntMap Int, Int)
moveBlock gaps (i, (fid, fileLen)) = (gaps', hereContrib)
  where
    foundGap = find ((>= fileLen) . snd) . IM.toAscList $ IM.takeWhileAntitone (< i) gaps
    hereContrib = fid * ((fileLen * (fileLen + 1)) `div` 2 + fileLen * (maybe i fst foundGap - 1))
    gaps' = case foundGap of
      Nothing -> gaps
      Just (gapI, gapLen) ->
        let addBack
              | gapLen > fileLen = IM.insert (gapI + fileLen) (gapLen - fileLen)
              | otherwise = id
         in addBack . IM.delete gapI $ gaps

part2 :: IntMap Int -> IntMap (Int, Int) -> Int
part2 gaps files = sum . snd . mapAccumL moveBlock gaps $ IM.toDescList files

Day 9 Benchmarks

>> Day 09a
benchmarking...
time                 6.924 ms   (6.781 ms .. 7.068 ms)
                     0.986 R²   (0.972 R² .. 0.997 R²)
mean                 7.307 ms   (7.129 ms .. 7.627 ms)
std dev              668.0 μs   (424.9 μs .. 912.4 μs)
variance introduced by outliers: 53% (severely inflated)

* parsing and formatting times excluded

>> Day 09b
benchmarking...
time                 16.25 ms   (16.15 ms .. 16.33 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 16.27 ms   (16.23 ms .. 16.31 ms)
std dev              96.57 μs   (73.62 μs .. 132.0 μs)

* parsing and formatting times excluded

Day 10


A lot of times in Haskell, two problems end up having the same algorithm, just with a different choice of Monoid. This puzzle is a good example of that.

We can do a simple DFS and collect all 9's into a monoid:

gatherNines :: Monoid m => (Point -> m) -> Map Point Int -> Point -> m
gatherNines f mp = go 0
  where
    go x p
      | x == 9 = f p
      | otherwise =
          foldMap (go (x+1)) . M.keys . M.filter (== (x+1)) $ mp `M.restrictKeys` neighbs
      where
        neighbs = S.fromList $ (p +) <$> [V2 0 (-1), V2 1 0, V2 0 1, V2 (-1) 0]

For part 1 the monoid is Set Point (the unique 9's) and for part 2 the monoid is Sum Int (the number of paths):

solve :: Monoid m => (Point -> m) -> (m -> Int) -> Map Point Int -> Int
solve gather observe mp =
    sum . map (observe . gatherNines gather mp) . M.keys $ M.filter (== 0) mp

part1 :: Map Point Int -> Int
part1 = solve S.singleton S.size

part2 :: Map Point Int -> Int
part2 = solve (const (Sum 1)) getSum

Day 10 Benchmarks

>> Day 10a
benchmarking...
time                 4.814 ms   (4.787 ms .. 4.843 ms)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 4.824 ms   (4.801 ms .. 4.852 ms)
std dev              78.24 μs   (54.79 μs .. 116.7 μs)

* parsing and formatting times excluded

>> Day 10b
benchmarking...
time                 4.727 ms   (4.713 ms .. 4.753 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 4.736 ms   (4.726 ms .. 4.752 ms)
std dev              37.76 μs   (28.79 μs .. 49.63 μs)

* parsing and formatting times excluded

Day 11


Today's "one trick" seems to be realizing that the actual ordered "list" is a red herring: a number's progression doesn't depend on any of its neighbors or ordering. So what we really have is not a list, but a multi-set. Stepping the multiset through 75 iterations is very efficient --- shows you what you gain when you use the correct data structure to represent the state!

freqs :: [Int] -> IntMap Int
freqs = IM.fromListWith (+) . map (,1)

stepMap :: IntMap Int -> IntMap Int
stepMap mp = IM.unionsWith (+)
  [ (* n) <$> freqs (step x)
  | (x, n) <- IM.toList mp
  ]

step :: Int -> [Int]
step c
  | c == 0 = [1]
  | even pow = let (a, b) = c `divMod` (10 ^ (pow `div` 2)) in [a, b]
  | otherwise = [c * 2024]
  where
    pow = length . takeWhile (<= c) $ iterate (* 10) 1

part1 :: [Int] -> Int
part1 = sum . (!! 25) . iterate stepMap . freqs

part2 :: [Int] -> Int
part2 = sum . (!! 75) . iterate stepMap . freqs
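A quick check of the splitting rule:

-- ghci> step 0
-- [1]
-- ghci> step 2024
-- [20,24]
-- ghci> step 3
-- [6072]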

My original reflections/write-up used data-memocombinators, but after some thought I believe that the frequency map approach is the most natural.

Day 11 Benchmarks

>> Day 11a
benchmarking...
time                 593.6 μs   (592.6 μs .. 594.4 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 593.3 μs   (592.4 μs .. 594.6 μs)
std dev              3.930 μs   (2.585 μs .. 5.749 μs)

* parsing and formatting times excluded

>> Day 11b
benchmarking...
time                 45.70 ms   (45.39 ms .. 46.05 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 44.66 ms   (44.05 ms .. 44.97 ms)
std dev              887.1 μs   (525.0 μs .. 1.377 ms)

* parsing and formatting times excluded

Day 12


First of all, let's assume we had a function that took a set and found all contiguous regions of that set:

contiguousRegions :: Set Point -> [Set Point]

Now we can take a Map Point a and build a map from each a to all of the contiguous regions of that a:

regions :: Ord a => Map Point a -> Map a [Set Point]
regions mp =
  contiguousRegions
    <$> M.fromListWith (<>) [ (x, S.singleton p) | (p, x) <- M.toList mp ]

Now it helps to take a region and create four sets: the first, all of the region's external neighbors to the north; the second, all of the region's external neighbors to the west; then south, then east:

neighborsByDir :: Set Point -> [Set Point]
neighborsByDir pts = neighborsAt <$> [V2 0 1, V2 1 0, V2 0 (-1), V2 (-1) 0]
  where
    neighborsAt d = S.map (+ d) pts `S.difference` pts

Now part 1 basically is the size of all of those points, and part 2 is the number of contiguous regions of those points:

solve :: Ord a => (Set Point -> Int) -> Map Point a -> Int
solve countFences mp = sum
    [ S.size region * countFences dirRegion
    | letterRegions <- M.elems (regions mp)
    , region <- letterRegions
    , dirRegion <- neighborsByDir region
    ]

part1 :: Ord a => Map Point a -> Int
part1 = solve S.size

part2 :: Ord a => Map Point a -> Int
part2 = solve (length . contiguousRegions)

Okay, I'll admit that I had contiguousRegions saved from multiple years of Advent of Code. The actual source isn't too pretty, but I'm including it here for the sake of completeness. In my actual code I use a set and a non-empty set instead of a list and a set.

-- | Find contiguous regions by cardinal neighbors
contiguousRegions :: Set Point -> Set (NESet Point)
contiguousRegions = startNewPool S.empty
  where
    startNewPool seenPools remaining = case S.minView remaining of
      Nothing -> seenPools
      Just (x, xs) ->
        let (newPool, remaining') = fillUp (NES.singleton x) S.empty xs
         in startNewPool (S.insert newPool seenPools) remaining'
    fillUp boundary internal remaining = case NES.nonEmptySet newBoundary of
      Nothing -> (newInternal, remaining)
      Just nb -> fillUp nb (NES.toSet newInternal) newRemaining
      where
        edgeCandidates = foldMap' cardinalNeighbsSet boundary `S.difference` internal
        newBoundary = edgeCandidates `S.intersection` remaining
        newInternal = NES.withNonEmpty id NES.union internal boundary
        newRemaining = remaining `S.difference` edgeCandidates

Day 12 Benchmarks

>> Day 12a
benchmarking...
time                 44.45 ms   (42.17 ms .. 49.85 ms)
                     0.972 R²   (0.929 R² .. 1.000 R²)
mean                 43.86 ms   (42.95 ms .. 47.64 ms)
std dev              2.986 ms   (792.5 μs .. 5.596 ms)
variance introduced by outliers: 20% (moderately inflated)

* parsing and formatting times excluded

>> Day 12b
benchmarking...
time                 42.87 ms   (42.16 ms .. 43.47 ms)
                     0.999 R²   (0.999 R² .. 1.000 R²)
mean                 42.53 ms   (42.38 ms .. 42.77 ms)
std dev              363.0 μs   (176.7 μs .. 597.5 μs)

* parsing and formatting times excluded

Day 13


This one reduces to basically solving two linear equations, but it's kind of fun to see what the linear library gives us to make things more convenient.

Basically for xa, ya, xb, yb, we want to solve the matrix equation M p = c for p, where c is our target <x, y>, and M is [ xa xb; ya yb ]. We're going to assume that our two buttons are linearly independent (they are not multiples of each other). Note that the M matrix is the transpose of the numbers as we originally parse them.

Normally we can solve this as p = M^-1 c, where M^-1 = [ yb -xb; -ya xa ] / det and det = xa*yb - xb*ya. However, we only care about integer solutions. This means that we can do some checks:

  1. Compute det = xa*yb - xb*ya and the matrix U = [ yb -xb; -ya xa ], which is M^-1 * det.
  2. Check that det is not 0.
  3. Compute p*det = U c.
  4. Check that every item of U c is divisible by det, i.e. (`mod` det) is 0.
  5. Our result is then (`div` det) applied to every item of U c.

linear has the det22 method for the determinant of a 2x2 matrix, but it doesn't quite have the M^-1 * det function, it only has M^-1 for Fractional instances. So we can write our own:

-- | Returns det(A) and inv(A)det(A)
inv22Int :: (Num a, Eq a) => M22 a -> Maybe (a, M22 a)
inv22Int m@(V2 (V2 a b) (V2 c d))
  | det == 0 = Nothing
  | otherwise = Just (det, V2 (V2 d (-b)) (V2 (-c) a))
  where
    det = det22 m

type Point = V2 Int

getPrize :: V2 Point -> Point -> Maybe Int
getPrize coeff targ = do
  (det, invTimesDet) <- inv22Int (transpose coeff)
  let resTimesDet = invTimesDet !* targ
      V2 a b = (`div` det) <$> resTimesDet
  guard $ all ((== 0) . (`mod` det)) resTimesDet
  pure $ 3 * a + b
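For example, on the first machine from the puzzle's example input (Button A: X+94, Y+34; Button B: X+22, Y+67; prize at X=8400, Y=5400):

-- ghci> getPrize (V2 (V2 94 34) (V2 22 67)) (V2 8400 5400)
-- Just 280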

part1 :: [(V2 Point, Point)] -> Int
part1 = sum . mapMaybe (uncurry getPrize)

part2 :: [(V2 Point, Point)] -> Int
part2 = part1 . map (second (10000000000000 +))

Here we take advantage of transpose, det22, !* for matrix-vector multiplication, the Functor instance of vectors for <$>, the Foldable instance of vectors for all, and the Num instance of vectors for numeric literals and +.

Day 13 Benchmarks

>> Day 13a
benchmarking...
time                 10.70 μs   (10.55 μs .. 10.96 μs)
                     0.993 R²   (0.987 R² .. 0.997 R²)
mean                 11.74 μs   (11.23 μs .. 12.34 μs)
std dev              1.963 μs   (1.546 μs .. 2.237 μs)
variance introduced by outliers: 95% (severely inflated)

* parsing and formatting times excluded

>> Day 13b
benchmarking...
time                 11.78 μs   (11.76 μs .. 11.80 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 11.79 μs   (11.77 μs .. 11.81 μs)
std dev              70.24 ns   (48.18 ns .. 88.93 ns)

* parsing and formatting times excluded

Day 14


Problems like this showcase the utility of using V2 from linear for keeping track of points. The "step" function ends up pretty clean:

type Point = V2 Int

step :: Point -> Point -> Point
step v x = mod <$> (x + v) <*> V2 101 103

Also, if we parse into [V2 Point] (a position and velocity paired up in a V2), we can use sequence to unzip our list into a V2 [Point]: a V2 containing the list of positions and the list of velocities. We can then use iterate and zipWith to step them:

part1 :: [V2 Point] -> Int
part1 pvs = score $ iterate (zipWith step vs) ps !! 100
  where
    V2 ps vs = sequence pvs
    score = product . M.fromListWith (+) . mapMaybe (\p -> (,1) <$> quadrant p)
    quadrant p = mfilter (notElem EQ) $ Just (compare <$> p <*> V2 50 51)

quadrant here uses the Applicative instance and also the Foldable instance with notElem.

For my original solve of part 2, I stopped when I detected any large clusters. But, once we see that the actual input consists of vertical and horizontal lines, we can do a bit of optimization. We know that the x positions have a period of 101, so frames with vertical lines appear with period 101. We know that the y positions have a period of 103, so frames with horizontal lines appear with period 103. So, we can look at the first 101 frames and find any vertical lines, then the first 103 frames and find any horizontal lines, and then do some math to figure out when the periodic appearances line up.

maxMargin :: [[Int]] -> Int
maxMargin = fst . maximumBy (comparing (concentration . snd)) . zip [0..]
  where
    concentration = product . M.fromListWith (+) . map (,1)

part2 :: [V2 Point] -> Int
part2 pvs = (xi + ((yi - xi) * 5151)) `mod` 10403
  where
    V2 ps vs = sequence pvs
    steps = iterate (zipWith step vs) ps
    xi = maxMargin (map (view _x) <$> take 101 steps)
    yi = maxMargin (map (view _y) <$> take 103 steps)
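The magic numbers 5151 and 10403 fall out of the Chinese Remainder Theorem:

-- we want t with: t == xi (mod 101) and t == yi (mod 103)
-- write t = xi + 101*k; then 101*k == yi - xi (mod 103)
-- 101 == -2 (mod 103), and (-2)*51 == -102 == 1 (mod 103),
-- so k == 51*(yi - xi) (mod 103), giving
-- t = xi + 5151*(yi - xi)  (mod 101*103 = 10403)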

Day 14 Benchmarks

>> Day 14a
benchmarking...
time                 1.251 ms   (1.246 ms .. 1.255 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 1.251 ms   (1.244 ms .. 1.261 ms)
std dev              26.18 μs   (21.12 μs .. 32.54 μs)

* parsing and formatting times excluded

>> Day 14b
benchmarking...
time                 14.04 ms   (13.92 ms .. 14.25 ms)
                     0.999 R²   (0.999 R² .. 1.000 R²)
mean                 13.92 ms   (13.88 ms .. 14.02 ms)
std dev              148.2 μs   (69.32 μs .. 252.7 μs)

* parsing and formatting times excluded

Day 15


This puzzle involves iteratively following "steps" and seeing how things change. If we store the world state polymorphically as a Map Point a, then we can write something generic to unite both parts.

Our polymorphic stepper will take a:

  1. Set Point of immovable walls
  2. A "glue" function Point -> Dir -> a -> [(Point, a)], which takes an a world entity and returns any other entities it is glued to.
  3. A starting state (Point, Map Point a), the player position and the position of the crates
  4. A Dir motion

and return the new updated (Point, Map Point a) state.

It will work by first trying to update the person state: if it moves into a crate, try to move the crate in the same direction, Point -> Map Point a -> a -> Maybe (Map Point a). This will then recursively try to move any crates along the way and any crates glued to it. The whole thing is wrapped up in a big Maybe monad, sequenced together with foldlM, so if anything fails, the whole thing fails. This is essentially a recursion-based DFS.

type Point = V2 Int
data Dir = North | East | South | West

moveByDir :: Point -> Dir -> Point
moveByDir p d = p + case d of
  North -> V2 0 1
  East -> V2 1 0
  South -> V2 0 (-1)
  West -> V2 (-1) 0

stepper ::
  forall a.
  (Point -> Dir -> a -> [(Point, a)]) ->
  Set Point ->
  (Point, Map Point a) ->
  Dir ->
  (Point, Map Point a)
stepper glue walls (person, crates) d
  | person' `S.member` walls = (person, crates)
  | otherwise = case M.lookup person' crates of
      Just lr -> maybe (person, crates) (person',) $ tryMove person' crates lr
      Nothing -> (person', crates)
  where
    person' = person `moveByDir` d
    tryMove :: Point -> Map Point a -> a -> Maybe (Map Point a)
    tryMove p crates' moved = do
      foldlM (\cs (p', moved') -> tryMoveSingle p' cs moved') crates' ((p, moved) : glue p d moved)
    tryMoveSingle :: Point -> Map Point a -> a -> Maybe (Map Point a)
    tryMoveSingle p crates' moved =
      commit
        <$> if p' `S.member` walls
          then Nothing
          else case M.lookup p' crates' of
            Just lr -> tryMove p' crates' lr
            Nothing -> Just crates'
      where
        p' = p `moveByDir` d
        commit = M.delete p . M.insert p' moved

Now to pick the glue and the a: for part 1, each crate contains no extra information, so a will be () and glue _ _ _ = [], no glue.

part1 :: Set Point -> Set Point -> Point -> [Dir] -> Set Point
part1 walls crates person =
    M.keysSet . snd . foldl' (stepper glue walls) (person, M.fromSet (const ()) crates)
  where
    glue _ _ _ = []

For part 2, each crate is either a [ or a ], left or right. So we can have the a be Bool, and the glue being the corresponding pair, but only if the motion direction is vertical.

part2 :: Set Point -> Map Point Bool -> Point -> [Dir] -> Set Point
part2 walls crates person =
    M.keysSet . snd . foldl' (stepper glue walls) (person, crates)
  where
    glue p d lr = [(bump lr p, not lr) | d `elem` [North, South]]
    bump = \case
      False -> (+ V2 1 0)
      True -> subtract (V2 1 0)

We can score our set of points:

score :: Set Point -> Int
score = sum . map (\(V2 x y) -> 100 * y + x) . toList 

Day 15 Benchmarks

>> Day 15a
benchmarking...
time                 2.817 ms   (2.795 ms .. 2.832 ms)
                     0.999 R²   (0.998 R² .. 1.000 R²)
mean                 2.844 ms   (2.819 ms .. 2.912 ms)
std dev              142.4 μs   (45.19 μs .. 261.0 μs)
variance introduced by outliers: 32% (moderately inflated)

* parsing and formatting times excluded

>> Day 15b
benchmarking...
time                 3.903 ms   (3.894 ms .. 3.912 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 3.880 ms   (3.870 ms .. 3.890 ms)
std dev              31.83 μs   (26.58 μs .. 39.40 μs)

* parsing and formatting times excluded

Day 16


Nothing too deep I could think of for this one, other than a specialized Dijkstra BFS that initially acts like normal Dijkstra until the first successful path is found: after that, it treats that cost as the best cost, and it only re-adds points back to the queue if their cost is less than the known best cost.

data Path n p = Path {pCurr :: n, pSeen :: Set n, pCost :: p}
  deriving stock (Eq, Ord, Show)

allMinimalPaths ::
  forall n p.
  (Ord n, Ord p, Num p) =>
  -- | neighborhood
  (n -> Map n p) ->
  -- | start
  n ->
  -- | target
  (n -> Bool) ->
  -- | all paths with the shortest cost
  Maybe (p, [Set n])
allMinimalPaths expand start targ = go0 (M.singleton start path0) (M.singleton 0 (NESeq.singleton path0))
  where
    path0 = Path start S.empty 0
    go0 :: Map n (Path n p) -> Map p (NESeq (Path n p)) -> Maybe (p, [Set n])
    go0 bests queue = do
      ((p, Path{..} NESeq.:<|| xs), queue') <- M.minViewWithKey queue
      let queue'' = case NESeq.nonEmptySeq xs of
            Nothing -> queue'
            Just xs' -> M.insert p xs' queue'
      if targ pCurr
        then Just (p, pSeen : go1 p bests (M.takeWhileAntitone (<= p) queue''))
        else
          uncurry go0 . M.foldlWithKey' (processNeighbor pCost pSeen) (bests, queue'') $ expand pCurr
    go1 :: p -> Map n (Path n p) -> Map p (NESeq (Path n p)) -> [Set n]
    go1 minCost bests queue = case M.minViewWithKey queue of
      Nothing -> []
      Just ((p, Path{..} NESeq.:<|| xs), queue') ->
        let queue'' = case NESeq.nonEmptySeq xs of
              Nothing -> queue'
              Just xs' -> M.insert p xs' queue'
         in if targ pCurr
              then pSeen : go1 minCost bests queue''
              else
                uncurry (go1 minCost)
                  . second (M.takeWhileAntitone (<= minCost))
                  . M.foldlWithKey' (processNeighbor pCost pSeen) (bests, queue'')
                  $ expand pCurr
    processNeighbor ::
      p ->
      Set n ->
      (Map n (Path n p), Map p (NESeq (Path n p))) ->
      n ->
      p ->
      (Map n (Path n p), Map p (NESeq (Path n p)))
    processNeighbor cost seen (bests, queue) x newCost
      | x `S.member` seen = (bests, queue)
      | otherwise = case M.lookup x bests of
          Nothing -> (M.insert x newPath bests, newQueue)
          Just Path{..}
            | cost + newCost <= pCost -> (M.insert x newPath bests, newQueue)
            | otherwise -> (bests, queue)
      where
        newPath = Path x (S.insert x seen) (cost + newCost)
        newQueue =
          M.insertWith
            (flip (<>))
            (cost + newCost)
            (NESeq.singleton newPath)
            queue

Then we can solve part 1 and part 2 with the same search:

type Point = V2 Int

type Dir = Finite 4

dirPoint :: Dir -> Point
dirPoint = SV.index $ SV.fromTuple (V2 0 (-1), V2 1 0, V2 0 1, V2 (-1) 0)

step :: Set Point -> (Point, Dir) -> Map (Point, Dir) Int
step walls (p, d) =
  M.fromList
    [ ((p, d'), 1000)
    | d' <- [d + 1, d - 1]
    , (p + dirPoint d') `S.notMember` walls
    ]
    <> if p' `S.member` walls
      then mempty
      else M.singleton (p', d) 1
  where
    p' = p + dirPoint d

solve :: Set Point -> Point -> Point -> Maybe (Int, [Set Point])
solve walls start end =
  second (map (S.map fst)) <$> allMinimalPaths (step walls) (start, 1) ((== end) . fst)  -- 1 :: Dir is east

part1 :: Set Point -> Point -> Point -> Maybe Int
part1 walls start end = fst <$> solve walls start end

part2 :: Set Point -> Point -> Point -> Maybe Int
part2 walls start end = S.size . mconcat . snd <$> solve walls start end

Right now we consider two nodes to be the same if they have the same position and the same direction, but there's a slight optimization we can do if we consider them to be the same if they are in the same position and on the same axis (going north/south vs going east/west) since it closes off paths that backtrack. However in practice this isn't really a big savings (5% for me).

Day 16 Benchmarks

>> Day 16a
benchmarking...
time                 314.6 ms   (288.1 ms .. 337.2 ms)
                     0.998 R²   (0.995 R² .. 1.000 R²)
mean                 322.6 ms   (315.8 ms .. 327.8 ms)
std dev              7.420 ms   (5.508 ms .. 9.232 ms)
variance introduced by outliers: 16% (moderately inflated)

* parsing and formatting times excluded

>> Day 16b
benchmarking...
time                 323.8 ms   (320.3 ms .. 330.2 ms)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 323.5 ms   (319.1 ms .. 326.7 ms)
std dev              4.496 ms   (2.480 ms .. 6.795 ms)
variance introduced by outliers: 16% (moderately inflated)

* parsing and formatting times excluded

Day 17


This one is a cute little interpreter problem, a staple of advent of code. Let's write Part 1 in a way that makes Part 2 easy, where we will have to eventually "run" it backwards. We can use Finite n as the type with n inhabitants, so Finite 8 will, for example, have the numbers 0 to 7. And also Vector n a from Data.Vector.Sized, which contains n items.

data Combo
  = CLiteral (Finite 4)
  | CReg (Finite 3)

data Instr
  = ADV Combo
  | BXL (Finite 8)
  | BST Combo
  | JNZ (Finite 4)
  | BXC
  | OUT Combo
  | BDV Combo
  | CDV Combo

We can then write a function to interpret the outputs into a monoid.

stepWith ::
  Monoid a =>
  Vector 8 Instr ->
  -- | out
  (Finite 8 -> a) ->
  -- | Starting a
  Word ->
  -- | Starting b
  Word ->
  -- | Starting c
  Word ->
  a
stepWith prog out = go 0
  where
    go i !a !b !c = case prog `SV.index` i of
      ADV r -> withStep go (a `div` (2 ^ combo r)) b c
      BXL l -> withStep go a (b `xor` fromIntegral l) c
      BST r -> withStep go a (combo r `mod` 8) c
      JNZ l
        | a == 0 -> withStep go 0 b c
        | otherwise -> go (weakenN l) a b c   -- weakenN :: Finite 4 -> Finite 8
      BXC -> withStep go a (b `xor` c) c
      OUT r ->
        let o = modulo (fromIntegral (combo r))
         in out o <> withStep go a b c
      BDV r -> withStep go a (a `div` (2 ^ combo r)) c
      CDV r -> withStep go a b (a `div` (2 ^ combo r))
      where
        combo = \case
          CLiteral l -> fromIntegral l
          CReg 0 -> a
          CReg 1 -> b
          CReg _ -> c
        withStep p
          | i == maxBound = \_ _ _ -> mempty
          | otherwise = p (i + 1)

Part 1 is a straightforward application, although we can use a difflist to get O(n) concats instead of O(n^2)

import qualified Data.DList as DL

part1 :: Vector 8 Instr -> Word -> Word -> Word -> [Finite 8]
part1 prog a b c = DL.toList $ stepWith prog DL.singleton a b c

Part 2 it gets a bit interesting. We can solve it "in general" under the conditions:

  1. The final instruction is JNZ 0
  2. There is one OUT per loop, with a register
  3. b and c are overwritten at the start of each loop

The plan would be:

  1. Start from the end with a known a and move backwards, accumulating all possible values of a that would lead to the end value, ignoring b and c
  2. For each of those possible a's, start from the beginning with that a and filter the ones that don't produce the correct OUT.

We have to write a "step backwards" from scratch, but we can actually use our original stepWith to write a version that bails after the first output, by having our monoid be Data.Monoid.First. Then in the line out o <> withStep go a b c, it'll just completely ignore the right hand side and return the first OUT result.

searchStep :: Vector 8 Instr -> [Finite 8] -> [Word]
searchStep prog outs = do
  -- enforce the invariants
  JNZ 0 <- pure $ prog `SV.index` maxBound
  [CReg _] <- pure [r | OUT r <- toList prog]
  search 0 (reverse outs)
  where
    search a = \case
      o : os -> do
        a' <- stepBack a
        guard $ stepForward a' == Just o
        search a' os
      [] -> pure a
    -- doesn't enforce that b and c are reset, because i'm lazy
    stepForward :: Word -> Maybe (Finite 8)
    stepForward a0 = getFirst $ stepWith prog (First . Just) a0 0 0
    stepBack :: Word -> [Word]
    stepBack = go' maxBound
      where
        go' i a = case prog `SV.index` i of
          ADV r -> do
            a' <- case r of
              CLiteral l -> ((a `shift` fromIntegral l) +) <$> [0 .. 2 ^ getFinite l - 1]
              CReg _ -> []
            go' (pred i) a'
          OUT _ -> pure a
          _ -> go' (pred i) a

We really only have to handle the ADV r case because that's the only instruction that modifies A. If we ADV 3, that means that the possible "starting A's" are known_a * 8 + x, where x is between 0 and 7.

Wrapping it all up:

part2 :: Vector 8 Instr -> [Finite 8] -> Maybe Word
part2 instrs = listToMaybe . searchStep instrs

Day 17 Benchmarks

>> Day 17a
benchmarking...
time                 2.371 μs   (2.248 μs .. 2.531 μs)
                     0.981 R²   (0.962 R² .. 1.000 R²)
mean                 2.246 μs   (2.206 μs .. 2.338 μs)
std dev              211.0 ns   (82.53 ns .. 369.8 ns)
variance introduced by outliers: 87% (severely inflated)

* parsing and formatting times excluded

>> Day 17b
benchmarking...
time                 4.444 μs   (4.421 μs .. 4.463 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 4.441 μs   (4.427 μs .. 4.455 μs)
std dev              52.32 ns   (40.38 ns .. 70.82 ns)

* parsing and formatting times excluded

Day 18


Honestly there really isn't much to this puzzle other than applying a basic BFS to solve the maze. It isn't really even big enough that a-star would help.

If you parse the maze into an fgl graph, you can use something like sp :: Node -> Node -> gr a b -> Maybe Path to get the shortest path. However, because we're here anyway, I'm going to paste in my personal BFS code that I use for these challenges that I wrote a while ago, where neighborhoods are given by an n -> Set n function. It uses a Seq as its internal queue, which is my favorite queue type in Haskell.

data BFSState n = BS
  { _bsClosed :: !(Map n (Maybe n))
  -- ^ map of item to "parent"
  , _bsOpen :: !(Seq n)
  -- ^ queue
  }

bfs :: forall n. Ord n => (n -> Set n) -> n -> (n -> Bool) -> Maybe [n]
bfs ex x0 dest = reconstruct <$> go (addBack x0 Nothing (BS M.empty Seq.empty))
  where
    reconstruct :: (n, Map n (Maybe n)) -> [n]
    reconstruct (goal, mp) = drop 1 . reverse $ goreco goal
      where
        goreco n = n : maybe [] goreco (mp M.! n)
    go :: BFSState n -> Maybe (n, Map n (Maybe n))
    go BS{..} = case _bsOpen of
      Empty -> Nothing
      n :<| ns
        | dest n -> Just (n, _bsClosed)
        | otherwise -> go . S.foldl' (processNeighbor n) (BS _bsClosed ns) $ ex n
    addBack :: n -> Maybe n -> BFSState n -> BFSState n
    addBack x up BS{..} =
      BS
        { _bsClosed = M.insert x up _bsClosed
        , _bsOpen = _bsOpen :|> x
        }
    processNeighbor :: n -> BFSState n -> n -> BFSState n
    processNeighbor curr bs0@BS{..} neighb
      | neighb `M.member` _bsClosed = bs0
      | otherwise = addBack neighb (Just curr) bs0

type Point = V2 Int

cardinalNeighbsSet :: Point -> Set Point
cardinalNeighbsSet p = S.fromDistinctAscList . map (p +) $
    [ V2 (-1) 0 , V2 0 (-1) , V2 0 1 , V2 1 0 ]

solveMaze :: Set Point -> Maybe Int
solveMaze walls = length <$> bfs step 0 (== 70)
  where
    step p = S.filter (all (inRange (0, 70))) $ cardinalNeighbsSet p `S.difference` walls

Now if you have a list of points [Point], for part 1 you just solve the maze after taking the first 1024 of them:

part1 :: [Point] -> Maybe Int
part1 = solveMaze . S.fromList . take 1024

For part 2, you can search for the first success, or you can do a binary search.

-- | Find the lowest value where the predicate is satisfied within the
-- given bounds.
binaryMinSearch :: (Int -> Bool) -> Int -> Int -> Maybe Int
binaryMinSearch p = go
  where
    go !x !y
      | x == mid || y == mid = Just (x + 1)
      | p mid = go x mid
      | otherwise = go mid y
      where
        mid = ((y - x) `div` 2) + x

part2 :: [Point] -> Maybe Point
part2 pts = do
    j <- binaryMinSearch (isNothing . solveMaze . (wallList !!)) 0 (length pts)
    pure $ pts !! (j - 1)
  where
    wallList = scanl (flip S.insert) S.empty pts

You should probably use a container type with better indexing than a list, though.
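For instance, a sketch swapping in Data.Vector for O(1) indexing into the prefix sets (part2' and wallVec are hypothetical names):

import qualified Data.Vector as V

part2' :: [Point] -> Maybe Point
part2' pts = do
    j <- binaryMinSearch (isNothing . solveMaze . (wallVec V.!)) 0 (length pts)
    pure $ pts !! (j - 1)
  where
    -- prefix sets of fallen walls, one per count of bytes dropped
    wallVec = V.fromList $ scanl (flip S.insert) S.empty pts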

Day 18 Benchmarks

>> Day 18a
benchmarking...
time                 6.592 ms   (6.559 ms .. 6.638 ms)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 6.546 ms   (6.517 ms .. 6.582 ms)
std dev              105.4 μs   (86.20 μs .. 140.9 μs)

* parsing and formatting times excluded

>> Day 18b
benchmarking...
time                 13.78 ms   (13.73 ms .. 13.83 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 13.80 ms   (13.78 ms .. 13.84 ms)
std dev              77.43 μs   (59.00 μs .. 101.2 μs)

* parsing and formatting times excluded

Day 19


This one can be solved using an infinite trie --- we build up an infinite trie of possibilities using patterns, and then look up a given design by tearing down that trie. Written altogether that gives us a hylomorphism! I've written about using tries with recursion schemes in my blog, so this seemed like a natural extension.

data CharTrie a = CT {ctHere :: Maybe a, ctThere :: Map Char (CharTrie a)}
  deriving stock (Show, Functor, Traversable, Foldable)

makeBaseFunctor ''CharTrie

-- generates for us:
data CharTrieF a r = CTF {ctHereF :: Maybe a, ctThereF :: Map Char r}
  deriving stock (Show, Functor, Traversable, Foldable)

We can parameterize on a monoid a to solve both parts. For part 1, a is (): Just () means that the design is in the trie, and Nothing means it is not. For part 2, a is Sum Int: Just (Sum n) means there are n ways to get this design, and Nothing means the design is unreachable.

First, the lookup algebra, which is standard for tries:

lookupAlg :: CharTrieF a (String -> Maybe a) -> String -> Maybe a
lookupAlg CTF{..} = \case
  [] -> ctHereF
  c : cs -> ($ cs) =<< M.lookup c ctThereF

If we had a CharTrie a, then cata lookupAlg myTree "hello" would look up "hello" in the trie.

The buildup co-algebra is an interesting one. We will convert a Map String a into a CharTrie a, but every time we reach the end of a string, we "restart" the building from the start, merging all of the resulting leaves monoidally. So we'll also take the original Set String of patterns, which we re-seed from whenever we hit the end of a pattern.

fromMapCoalg ::
  forall a.
  (Semigroup a) =>
  Set String ->
  Map String a ->
  CharTrieF a (Map String a)
fromMapCoalg mp0 = \ks ->
  let x = M.lookup [] ks
      reAdd = case x of
        Nothing -> id
        Just y -> M.unionWith (M.unionWith (<>)) (M.fromSet (const y) <$> initialSplit)
   in CTF x $ reAdd (splitTrie ks)
  where
    initialSplit :: Map Char (Set String)
    initialSplit = M.fromAscListWith (<>) [ (k, S.singleton ks) | k : ks <- toList mp0 ]
    splitTrie :: Map String a -> Map Char (Map String a)
    splitTrie mp = M.fromAscListWith (<>) [ (k, M.singleton ks x) | (k : ks, x) <- M.toList mp ]

And that's it! Our hylomorphism will build up the infinite trie, but only the specific branch that we end up looking up from it. Because it's a hylomorphism, we never actually generate any trie structure: we basically build up only the branch we care about (driven by the lookup) and stop when we finish looking up or hit a dead end.

buildable :: (Semigroup a) => a -> Set String -> String -> Maybe a
buildable x mp = hylo lookupAlg (fromMapCoalg mp) (M.fromSet (const x) mp)

part1 :: Set String -> [String] -> Int
part1 pats = length . mapMaybe (buildable () pats)

part2 :: Set String -> [String] -> Int
part2 pats = getSum . foldMap (fold . buildable (Sum 1) pats)

However, this may be a case where the hylomorphism is slower than doing the unfold and fold separately, because the full CharTrie is actually going to be re-used multiple times for each design. However, there's something a little more satisfying about just re-building and tearing down every time.

Day 19 Benchmarks

>> Day 19a
benchmarking...
time                 274.5 ms   (238.8 ms .. 319.5 ms)
                     0.990 R²   (0.965 R² .. 1.000 R²)
mean                 291.0 ms   (280.4 ms .. 304.0 ms)
std dev              15.07 ms   (8.887 ms .. 20.78 ms)
variance introduced by outliers: 16% (moderately inflated)

* parsing and formatting times excluded

>> Day 19b
benchmarking...
time                 263.2 ms   (252.7 ms .. 273.1 ms)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 274.3 ms   (268.5 ms .. 285.3 ms)
std dev              10.64 ms   (1.505 ms .. 14.39 ms)
variance introduced by outliers: 16% (moderately inflated)

* parsing and formatting times excluded

Day 20


Because this is a "race track" with no branching, finding the path to the end can be a straightforward DFS with no-takebacksies:

cardinalNeighbsSet :: Point -> Set Point
cardinalNeighbsSet p = S.fromDistinctAscList . map (p +) $
    [ V2 (-1) 0 , V2 0 (-1) , V2 0 1 , V2 1 0 ]

racePath :: Set Point -> Point -> Point -> Maybe [Point]
racePath walls start end = go Nothing start
  where
    go :: Maybe Point -> Point -> Maybe [Point]
    go prev here = do
      next <- S.lookupMin candidates
      (here :)
        <$> if next == end
          then pure [end]
          else go (Just here) next
      where
        candidates = maybe id S.delete prev $ cardinalNeighbsSet here `S.difference` walls

Since our race path is one continuous line, a cheat therefore involves "pinching" the line so that you skip straight over one segment of it. So we can basically iterate over each point in the line and imagine if we jumped ahead N spaces. If the time saved by jumping N spaces minus the real-world distance is greater than the threshold, it's a legitimate cheat.

mannDist :: Point -> Point -> Int
mannDist x y = sum (abs (x - y))

mannNorm :: Point -> Int
mannNorm = mannDist 0

findCheats :: Set Point -> Point -> Point -> Int -> Int -> Maybe Int
findCheats walls start end len thresh = do
  path <- racePath walls start end
  pure . sum . snd $ mapAccumR go (0, M.empty) path
  where
    go :: (Int, Map Point Int) -> Point -> ((Int, Map Point Int), Int)
    go (i, xs) x =
      ( (i + 1, M.insert x i xs)
      , M.size $
          M.filterWithKey (\y j -> let d = mannDist x y in d <= len && i - j - d >= thresh) xs
      )

Our mapAccumR here iterates from the end of the list with the index (i) and a map xs of points to the index where that point is on the racetrack. At each point, we output the number of cheats: it's the xs filtered by points legally jumpable within a given distance, and then further filtered where the jump in index i - j minus the time to travel mannDist x y is greater than the threshold for counting the cheat. In the end we sum all of those outputs.

Day 20 Benchmarks

>> Day 20a
benchmarking...
time                 34.12 ms   (32.70 ms .. 35.54 ms)
                     0.994 R²   (0.983 R² .. 1.000 R²)
mean                 34.98 ms   (34.32 ms .. 36.48 ms)
std dev              1.880 ms   (715.8 μs .. 3.355 ms)
variance introduced by outliers: 18% (moderately inflated)

* parsing and formatting times excluded

>> Day 20b
benchmarking...
time                 405.5 ms   (393.5 ms .. 431.8 ms)
                     0.999 R²   (0.999 R² .. 1.000 R²)
mean                 393.5 ms   (390.1 ms .. 399.7 ms)
std dev              5.956 ms   (1.238 ms .. 7.831 ms)
variance introduced by outliers: 19% (moderately inflated)

* parsing and formatting times excluded

Day 21

Top / Prompt / Code / Standalone

Everything reveals itself if we imagine a lookup table of "best path from A to B". For my own purposes I've made the functions parameterized by button pad, using Maybe a, where Nothing is the A key and Just x is the x key.

type LookupTable a b = Map (Maybe a) (Map (Maybe a) [Maybe b])

type LookupTableLengths a = Map (Maybe a) (Map (Maybe a) Int)

toLengths :: LookupTable a b -> LookupTableLengths a
toLengths = fmap (fmap length)

The key is that now these maps are composable:

spellDirPathLengths :: Ord a => LookupTableLengths a -> [Maybe a] -> Int
spellDirPathLengths mp xs = sum $ zipWith (\x y -> (mp M.! x) M.! y) xs (drop 1 xs)

composeDirPathLengths :: Ord b => LookupTableLengths b -> LookupTable a b -> LookupTableLengths a
composeDirPathLengths mp = (fmap . fmap) (spellDirPathLengths mp . (Nothing :))

That is, if you have the lookup table for two layers, you can compose them to create one big lookup table.

data Dir = North | East | West | South
type NumButton = Finite 10  -- Finite from finite-typelits

dirPathChain :: [LookupTableLengths Dir]
dirPathChain = iterate (`composeDirPathLengths` dirPath @Dir) (dirPathCosts @Dir)

solveCode :: Int -> [Maybe NumButton] -> Int
solveCode n = spellDirPathLengths mp . (Nothing :)
  where
    lengthChain = dirPathChain !! (n - 1)
    mp = lengthChain `composeDirPathLengths` dirPath @NumButton

The nice thing is that you only need to compute dirPathChain once, to get the final LookupTableLengths for a given n, and you can re-use it for everything.
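
As a sketch of how that plays out (a hypothetical driver; numPart, standing in for whatever extracts the numeric part of a code, isn't shown here): part 1 has a chain of two directional keypads and part 2 has twenty-five.

part1, part2 :: [[Maybe NumButton]] -> Int
part1 codes = sum [ solveCode 2 c * numPart c | c <- codes ]
part2 codes = sum [ solveCode 25 c * numPart c | c <- codes ]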

Generating the actual LookupTable NumButton Dir and LookupTable Dir Dir is the tricky part. I generated mine from the shortest paths as seen by the third bot up the chain from the bottom: an fgl graph where the nodes were the states of three bots and the edges were the actions the fourth "controller" would take, with shortest paths computed in terms of that fourth controller. Three seems to be the magic number: anything higher gives the same answer, anything lower gives suboptimal final paths.

Day 21 Benchmarks

>> Day 21a
benchmarking...
time                 3.840 μs   (3.834 μs .. 3.851 μs)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 3.883 μs   (3.848 μs .. 4.052 μs)
std dev              222.9 ns   (19.15 ns .. 512.1 ns)
variance introduced by outliers: 69% (severely inflated)

* parsing and formatting times excluded

>> Day 21b
benchmarking...
time                 3.839 μs   (3.831 μs .. 3.849 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 3.841 μs   (3.835 μs .. 3.845 μs)
std dev              16.92 ns   (13.66 ns .. 20.87 ns)

* parsing and formatting times excluded

Day 22

Top / Prompt / Code / Standalone

First let's set up the RNG step:

step :: Int -> Int
step = prune . phase3 . prune . phase2 . prune . phase1
  where
    phase1 n = (n `shift` 6) `xor` n     -- multiply by 64 and mix
    phase2 n = (n `shift` (-5)) `xor` n  -- divide by 32 and mix
    phase3 n = (n `shift` 11) `xor` n    -- multiply by 2048 and mix
    prune = (.&. 16777215)               -- keep the low 24 bits: 16777215 == 2^24 - 1

Part 1 is just running and summing:

part1 :: [Int] -> Int
part1 = sum . map ((!! 2000) . iterate step)

Part 2 is a little more interesting. We want to make a map of difference 4-sequences to the first price that sequence would fetch. On a chain of iterations, we can iteratively chomp on runs of 4:

chompChomp :: [Int] -> [([Int], Int)]
chompChomp (a : b : c : d : e : fs) =
    ([da, db, dc, dd], e) : chompChomp (b : c : d : e : fs)
  where
    da = b - a
    db = c - b
    dc = d - c
    dd = e - d
chompChomp _ = []

priceForChain :: Int -> Map [Int] Int
priceForChain = M.fromListWith (const id) . chompChomp . take 2000 . map (`mod` 10) . iterate step
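
For example, chompChomp on a short run of steadily increasing prices yields, for each window of five, the four differences and the final price:

chompChomp [1,2,3,4,5,6]
-- [([1,1,1,1],5),([1,1,1,1],6)]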

Then we can sum all of the sequence prices and get the maximum:

part2 :: [Int] -> Int
part2 = maximum . M.elems . M.unionsWith (+) . map priceForChain

I'm not super happy that this takes 3 seconds (even after optimizing to an IntMap keyed on a base-19 encoding of the sequence). Switching to a single mutable vector that does all of the summing (plus a per-seed mutable vector to prevent double-adds) brings it down to 800ms, which still isn't particularly ideal.
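
For the record, the base-19 encoding works because each price difference lies in [-9 .. 9] (19 possible values), so a window of four differences packs into a single Int key. A minimal sketch:

import Data.List (foldl')

-- shift each difference from [-9..9] into [0..18], then pack in base 19
encodeWindow :: [Int] -> Int
encodeWindow = foldl' (\acc d -> acc * 19 + (d + 9)) 0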

Day 22 Benchmarks

>> Day 22a
benchmarking...
time                 30.33 ms   (29.99 ms .. 30.67 ms)
                     0.999 R²   (0.998 R² .. 1.000 R²)
mean                 30.51 ms   (30.30 ms .. 30.90 ms)
std dev              550.9 μs   (203.2 μs .. 935.3 μs)

* parsing and formatting times excluded

>> Day 22b
benchmarking...
time                 776.3 ms   (767.0 ms .. 784.6 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 780.4 ms   (778.0 ms .. 782.1 ms)
std dev              2.663 ms   (2.009 ms .. 3.144 ms)
variance introduced by outliers: 19% (moderately inflated)

* parsing and formatting times excluded

Day 23

Top / Prompt / Code / Standalone

This one ended up being a nice hylomorphism.

We can build the upper triangle of the adjacency map: only include edges from items to items later in the alphabet.

connMap :: Ord a => [(a, a)] -> Map a (Set a)
connMap xs =
  M.unionsWith
    (<>)
    [ M.fromList [(a, S.singleton b), (b, S.empty)]
    | [a, b] <- xs <&> \(x, y) -> sort [x, y]
    ]

Part 1 we can manually unroll:

part1 :: Map String (Set String) -> Int
part1 conns = length do
    (a, adjA) <- M.toList conns
    b <- toList adjA
    c <- toList $ (conns M.! b) `S.intersection` adjA
    guard $ any ("t" `isPrefixOf`) [a, b, c]

This is using the list monad's non-determinism for a depth-first search: for every item a, every item b in its adjacency set can be in its triple. From there we can add any item c in the adjacencies of b, provided c is also in adjA, the adjacencies of a.

Part 2 is where things get fun. One way to look at it: from each starting point, build a tree of all valid adjacency hops from it, where each next child must be reachable from all of its parents. Then, collapse all branching paths from top to bottom.

Therefore, our base functor is a list of parents to children:

newtype Branch a = Branch { unBranch :: [(String, a)] }
  deriving Functor

And now we are in good shape to write our hylomorphism:

allCliques :: Map String (Set String) -> [[String]]
allCliques conns = hylo tearDown build (M.toList conns)
  where
    build = Branch
          . map (\(a, cands) -> (a, [(b, cands `S.intersection` (conns M.! b)) | b <- toList cands]))
    tearDown = foldMap (\(here, there) -> (here :) <$> if null there then pure [] else there)
             . unBranch

part2 :: Map String (Set String) -> [String]
part2 = maximumBy (comparing length) . allCliques
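
The actual puzzle answer wants that clique comma-separated, and since the search only ever extends to names later in the alphabet, the clique already comes out sorted. A small sketch of the final step:

import Data.List (intercalate)

password :: Map String (Set String) -> String
password = intercalate "," . part2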

Day 23 Benchmarks

>> Day 23a
benchmarking...
time                 3.750 ms   (3.729 ms .. 3.780 ms)
                     0.998 R²   (0.995 R² .. 1.000 R²)
mean                 3.789 ms   (3.762 ms .. 3.836 ms)
std dev              127.3 μs   (65.75 μs .. 221.4 μs)
variance introduced by outliers: 16% (moderately inflated)

* parsing and formatting times excluded

>> Day 23b
benchmarking...
time                 48.57 ms   (48.41 ms .. 48.73 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 48.66 ms   (48.57 ms .. 48.78 ms)
std dev              200.5 μs   (137.2 μs .. 264.3 μs)

* parsing and formatting times excluded

Day 24

Top / Prompt / Code / Standalone

Let's make a nice flexible Gate Functor/Traversable that will guide us along our journey.

data Op = OAnd | OOr | OXor
  deriving stock (Eq, Ord, Show, Generic)

data Gate a = Gate {gOp :: Op, gX :: a, gY :: a}
  deriving stock (Show, Generic, Functor, Traversable, Foldable)

applyGate :: Gate Bool -> Bool
applyGate Gate{..} = case gOp of
  OAnd -> gX && gY
  OOr -> gX || gY
  OXor -> gX /= gY

Part 1 we can use the typical knot-tying trick: from a Map String (Gate String), generate a Map String Bool of labels to their results, by having each gate look its inputs up in that same result map. We use the Functor instance of Gate to get fmap (result M.!) :: Gate String -> Gate Bool.

part1 :: Map String Bool -> Map String (Gate String) -> Int
part1 inputs gates = sum [ 2 ^ read n | ('z':n, True) <- M.toList result ]
  where
    result :: Map String Bool
    result = inputs <> fmap (applyGate . fmap (result M.!)) gates

Now part 2, the fun part. One thing we can do is generate the ideal adder circuit as a tree of nested Gates, using Free, since Free f a = Pure a | Free (f (Free f a)).

type GateTree = Free Gate

halfAdder :: GateTree a -> GateTree a -> (GateTree a, GateTree a)
halfAdder x y = (wrap $ Gate OAnd x y, wrap $ Gate OXor x y)

-- | returns carry bit and output bit
fullAdder :: GateTree a -> GateTree a -> GateTree a -> (GateTree a, GateTree a)
fullAdder x y carry0 = (wrap $ Gate OOr carry1 carry2, o)
  where
    (carry1, z) = halfAdder x y
    (carry2, o) = halfAdder z carry0

-- | returns final carry bit and all n output bits
adderTree :: Int -> (GateTree String, NonEmpty (GateTree String))
adderTree n
  | n == 0 = (:| []) `second` halfAdder (pure "x00") (pure "y00")
  | otherwise =
      let (carryIn, rest) = adderTree (n - 1)
          (carryOut, new) = fullAdder (pure (printf "x%02d" n)) (pure (printf "y%02d" n)) carryIn
       in (carryOut, new `NE.cons` rest)

Now for the magic of Free: we can collapse it all into a flattened structure using iterA, which "folds" each layer of the free structure. We build up a map of known gates, assigning each unseen gate a new unique ID, giving a Map (Gate (Either Int String)) Int. Left means the gate input points to a known Int id and Right means it was an input xNN/yNN variable.

unrollGates ::
  forall a. Ord a => GateTree a -> State (Int, Map (Gate (Either Int a)) Int) (Either Int a)
unrollGates = iterA go . fmap Right
  where
    go g0 = do
      gate <- sequenceA g0
      (currIx, currMp) <- get
      case M.lookup gate currMp of
        Nothing -> do
          put (currIx + 1, M.insert gate currIx currMp)
          pure $ Left currIx
        Just i -> pure $ Left i

unrollAdderTree :: Int -> ([Int], IntMap (Gate (Either Int String)))
unrollAdderTree n = (lefts $ toList outs, IM.fromList $ swap <$> M.toList mp)
  where
    (carry, adder) = adderTree n
    full = NE.reverse $ carry `NE.cons` adder
    (outs, (_, mp)) = runState (traverse unrollGates full) (0, M.empty)

We wrap it all up with unrollAdderTree, which returns the map of gates to Int ids along with all of the top-level output ids. This works because the trees in carry/adder/full are exactly the top-level outputs, so traverse pulls out those Ints as its final result.

Finally we can wrap it all up in the list monad for a search. The whole thing composes NameState -> [NameState] branches using >=>, where a dead end is signaled by returning an empty list.

data NameState = NS
  { nsRenames :: Map String String
  , nsNames :: IntMap String
  , nsFound :: Bool
  }

nameGate :: Map (Gate String) String -> Int -> Gate (Either Int String) -> NameState -> [NameState]
nameGate avail ng g0 NS{..} =
  case applySwaps nsRenames <$> M.lookup gate avail of
    Nothing -> []
    Just here ->
      -- the all-goes-well branch
      NS{nsNames = IM.insert ng here nsNames, ..}
      -- all possible substitutions/switches
        : [ NS renames (IM.insert ng there nsNames) True
          | not nsFound
          , there <- toList avail
          , here /= there
          , let renames = M.fromList [(here, there), (there, here)] <> nsRenames
          ]
  where
    gate = either (nsNames IM.!) id <$> g0
    applySwaps mp x = M.findWithDefault x x mp

nameTree :: Map (Gate String) String -> [Map String String]
nameTree avail = nsRenames <$> foldr (\o -> (go o >=>)) pure outGates s0
  where
    s0 = NS M.empty IM.empty False
    (outGates, gates) = unrollAdderTree 44
    go outGate ns0
      | M.size (nsRenames ns0) == 8 = [ns0]
      | otherwise =
          IM.foldrWithKey
            (\k g -> (nameGate avail k g >=>))
            pure
            (IM.takeWhileAntitone (<= outGate) gates)
            (ns0{nsFound = False})

The search proceeds layer-by-layer: all of the gates feeding z00 first, then those feeding z01, etc. There is also a major optimization that makes this all feasible: we only expect one swap per layer.

Anyway that's it:

part2 :: Map (Gate String) String -> Maybe [String]
part2 = fmap M.keys . listToMaybe . nameTree
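
And since M.keys returns its keys in ascending order, the final comma-joined answer string falls out directly; a small sketch of that last step:

import Data.List (intercalate)

answer :: Map (Gate String) String -> Maybe String
answer = fmap (intercalate ",") . part2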

Day 24 Benchmarks

>> Day 24a
benchmarking...
time                 104.0 μs   (103.9 μs .. 104.2 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 104.9 μs   (104.5 μs .. 106.1 μs)
std dev              2.157 μs   (1.014 μs .. 4.014 μs)
variance introduced by outliers: 16% (moderately inflated)

* parsing and formatting times excluded

>> Day 24b
benchmarking...
time                 1.569 ms   (1.554 ms .. 1.604 ms)
                     0.997 R²   (0.991 R² .. 1.000 R²)
mean                 1.550 ms   (1.542 ms .. 1.572 ms)
std dev              46.62 μs   (11.57 μs .. 87.93 μs)
variance introduced by outliers: 17% (moderately inflated)

* parsing and formatting times excluded

Day 25

Top / Prompt / Code / Standalone

As usual, a nice relaxing day to celebrate Christmas :)

Assuming we have a list of keys and locks interspersed, as [Set (Int, Int)], we can marginalize to get the x-wise histograms and y-wise histograms:

marginX :: Set (Int, Int) -> Map Int Int
marginX = M.fromListWith (+) . map (\(x, _) -> (x, 1)) . toList

marginY :: Set (Int, Int) -> Map Int Int
marginY = M.fromListWith (+) . map (\(_, y) -> (y, 1)) . toList

We can distinguish keys from locks by checking if y=0 has all 5 points filled:

isLock :: Set (Int, Int) -> Bool
isLock = (== 5) . M.findWithDefault 0 0 . marginY

We can check if a lock/key pair fits by checking that none of their x margins add up to more than 7. Wrapping it all in the list monad's cartesian product, we get:

day25 :: [Set (Int, Int)] -> Int
day25 = uncurry countCombos . partition isLock
  where
    countCombos locks keys = length do
      lock <- marginX <$> locks
      key <- marginX <$> keys
      guard $ all (< 8) (M.unionWith (+) lock key)

Day 25 Benchmarks

>> Day 25a
benchmarking...
time                 6.789 ms   (6.668 ms .. 6.890 ms)
                     0.990 R²   (0.973 R² .. 1.000 R²)
mean                 6.953 ms   (6.849 ms .. 7.262 ms)
std dev              552.1 μs   (61.43 μs .. 1.049 ms)
variance introduced by outliers: 46% (moderately inflated)

* parsing and formatting times excluded