
Merge in latest aptly changes and add split reflists / other perf fixes

  1. Feb 12, 2024
    • Split reflists to share their contents across snapshots · 353a6374
      Ryan Gonzalez authored and Emanuele Aina committed
      
      In current aptly, each repository and snapshot has its own reflist in
      the database. This brings a few problems with it:
      
      - Given sufficiently large repositories and snapshots, these lists can
        get enormous, reaching >1MB. This is a problem for LevelDB's overall
        performance, as it tends to prefer values around the configured block
        size (which defaults to just 4KiB).
      - When you take these large repositories and snapshot them, you have a
        full, new copy of the reflist, even if only a few packages changed.
        This means that having a lot of snapshots, each with only a few
        changes, causes the database to be filled with largely duplicate reflists.
      - All the duplication also means that many of the same refs are being
        loaded repeatedly, which can cause some slowdown but, more notably,
        eats up huge amounts of memory.
      - Adding on more and more new repositories and snapshots will cause the
        time and memory spent on things like cleanup and publishing to grow
        roughly linearly.
      
      At the core, there are two problems here:
      
      - Reflists get very big because there are just a lot of packages.
      - Different reflists can tend to duplicate much of the same contents.
      
      *Split reflists* aim at solving this by separating reflists into 64
      *buckets*. Package refs are sorted into individual buckets according to
      the following system:
      
      - Take the first 3 letters of the package name, after dropping a `lib`
        prefix. (Using only the first 3 letters will cause packages with
        similar prefixes to end up in the same bucket, under the assumption
        that packages with similar names tend to be updated together.)
      - Take the 64-bit xxhash of these letters. (xxhash was chosen because it
        has a relatively good distribution across the individual bits, which is
        important for the next step.)
      - Use the first 6 bits of the hash (range [0:63]) as an index into the
        buckets.
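
      A minimal Go sketch of this bucketing rule follows (the function name
      and the choice of hash bits are illustrative assumptions, not aptly's
      exact code):

      ```go
      package main

      import (
          "fmt"
          "strings"

          "github.com/cespare/xxhash/v2" // assumed 64-bit xxhash implementation
      )

      // bucketIdx illustrates the rule described above: drop a "lib" prefix,
      // keep the first 3 letters, hash them with 64-bit xxhash, and use 6 bits
      // of the hash as a bucket index in [0:63].
      func bucketIdx(pkg string) int {
          name := strings.TrimPrefix(pkg, "lib")
          if len(name) > 3 {
              name = name[:3]
          }
          // The top 6 bits are used here; whether the real code takes the high
          // or low bits is an implementation detail.
          return int(xxhash.Sum64String(name) >> 58)
      }

      func main() {
          // "libssl3" and "ssl-cert" both reduce to the prefix "ssl", so they
          // land in the same bucket.
          for _, pkg := range []string{"libssl3", "ssl-cert", "vim"} {
              fmt.Printf("%s -> bucket %d\n", pkg, bucketIdx(pkg))
          }
      }
      ```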
      
      Once refs are placed in buckets, a sha256 digest of all the refs in the
      bucket is taken. These buckets are then stored in the database, split
      into roughly block-sized segments, and all the repositories and
      snapshots simply store an array of bucket digests.
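
      Concretely, the stored shape might look like the sketch below (names such
      as `SplitRefList` and `bucketDigest` are hypothetical, not aptly's actual
      types): each bucket is hashed with SHA-256 to produce its database key,
      and a repository or snapshot records only the 64 digests.

      ```go
      package reflist

      import "crypto/sha256"

      const numBuckets = 64

      // SplitRefList sketches what a repository or snapshot stores: not the
      // refs themselves, just one digest per bucket. Snapshots whose buckets
      // have identical contents end up with identical digests and therefore
      // share the stored bucket data.
      type SplitRefList struct {
          BucketDigests [numBuckets][]byte
      }

      // bucketDigest derives the database key for one bucket from its refs.
      func bucketDigest(refs [][]byte) []byte {
          h := sha256.New()
          for _, ref := range refs {
              h.Write(ref)
              h.Write([]byte{0}) // separator so adjacent refs can't run together
          }
          return h.Sum(nil)
      }
      ```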
      
      This approach means that *repositories and snapshots can share their
      reflist buckets*. If a snapshot is taken of a repository, it will have
      the same contents, so its split reflist will point to the same buckets
      as the base repository, and only one copy of each bucket is stored in
      the database. When some packages in the repository change, only the
      buckets containing those packages will be modified; all the other
      buckets will remain unchanged, and thus their contents will still be
      shared. Later on, when these reflists are loaded, each bucket is only
      loaded once, avoiding the repeated loading of many megabytes of data. In effect,
      split reflists are essentially copy-on-write, with only the changed
      buckets stored individually.
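
      To illustrate the loading side (again with the hypothetical names from
      the sketch above, not aptly's code): when many reflists are loaded
      together, a cache keyed by bucket digest means each shared bucket hits
      the database only once.

      ```go
      // loadBuckets resolves a set of split reflists into their buckets,
      // fetching each distinct digest exactly once. fetch stands in for the
      // database lookup.
      func loadBuckets(lists []SplitRefList, fetch func(digest []byte) [][]byte) map[string][][]byte {
          cache := make(map[string][][]byte)
          for _, list := range lists {
              for _, digest := range list.BucketDigests {
                  key := string(digest)
                  if _, ok := cache[key]; !ok {
                      cache[key] = fetch(digest) // only new digests reach the DB
                  }
              }
          }
          return cache
      }
      ```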
      
      Changing the disk format means that a migration needs to take place, so
      that task is moved into the database cleanup step, which will migrate
      reflists over to split reflists, as well as delete any unused reflist
      buckets.
      
      All the reflist tests are also changed to additionally test out split
      reflists; although the internal logic is all shared (since buckets are,
      themselves, just normal reflists), some special additions are needed to
      have native versions of the various reflist helper methods.
      
      In our tests, we've observed the following improvements:
      
      - Memory usage during publish and database cleanup, with
        `GOMEMLIMIT=2GiB`, goes down from ~3.2GiB (larger than the memory
        limit!) to ~0.7GiB, a decrease of ~4.5x.
      - Database size decreases from 1.3GB to 367MB.
      
      *In my local tests*, publish times also decreased to mere seconds, but
      the same effect wasn't observed on the server, where the times stayed
      around the same. My suspicion is that this is due to I/O performance:
      my local system is an M1 MBP, which almost certainly has much faster
      disk speeds than our DigitalOcean block volumes. Split reflists have
      the side effect of requiring more random accesses, since all the
      buckets are read by their keys, so if your random I/O performance is
      slower, it might cancel out the benefits. That being
      said, even in that case, the memory usage and database size advantages
      still persist.
      
      Signed-off-by: Ryan Gonzalez <ryan.gonzalez@collabora.com>
    • Fix reflist diffs failing to compact when one of the inputs ends · f52c4f38
      Ryan Gonzalez authored and Emanuele Aina committed
      
      The previous reflist logic would early-exit the loop body if one of the
      lists was empty, but that skips the compacting logic entirely.
      
      Instead of doing the early-exit, we can leave a list's ref as nil when
      the list end is reached and then flip the comparison result, which will
      essentially treat it as being greater than all others. This should
      preserve the general behavior without omitting the compaction.
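
      A simplified sketch of the idea (hypothetical names, and with the actual
      compaction step omitted): once a list runs out, its current ref is nil
      and compares greater than any real ref, so the loop keeps draining the
      other list through the normal path instead of exiting early.

      ```go
      package reflist

      import "bytes"

      // compareRefs treats a nil ref as greater than every real ref, so the
      // side that has already ended never "wins" a comparison.
      func compareRefs(a, b []byte) int {
          switch {
          case a == nil && b == nil:
              return 0
          case a == nil:
              return 1
          case b == nil:
              return -1
          default:
              return bytes.Compare(a, b)
          }
      }

      // walkDiff visits two sorted reflists in lockstep without early-exiting
      // when one of them ends; refAt returns nil past the end of a list.
      func walkDiff(left, right [][]byte, visit func(l, r []byte)) {
          refAt := func(list [][]byte, i int) []byte {
              if i < len(list) {
                  return list[i]
              }
              return nil
          }
          for i, j := 0, 0; i < len(left) || j < len(right); {
              a, b := refAt(left, i), refAt(right, j)
              switch c := compareRefs(a, b); {
              case c < 0:
                  visit(a, nil) // only in the left list
                  i++
              case c > 0:
                  visit(nil, b) // only in the right list
                  j++
              default:
                  visit(a, b) // present in both lists
                  i++
                  j++
              }
          }
      }
      ```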
      
      Signed-off-by: Ryan Gonzalez <ryan.gonzalez@collabora.com>
    • Skip loading reflists when listing published repos · e84b0a39
      Ryan Gonzalez authored and Emanuele Aina committed
      
      The output doesn't actually depend on the reflists, and loading them for
      every published repo starts to take substantial time and memory.
      
      Signed-off-by: Ryan Gonzalez <ryan.gonzalez@collabora.com>
    • docker: Switch from building on Debian to using the official Go image · 2bf1cbfa
      Ryan Gonzalez authored and Emanuele Aina committed
      
      Getting Go 1.21 (required by newer aptly) on bookworm requires using
      the backports repository; it's easier to just rely on the official
      images instead.
      
      Signed-off-by: Ryan Gonzalez <ryan.gonzalez@collabora.com>