1. 08 Apr, 2020 1 commit
    • btrfs: fix reclaim counter leak of space_info objects · d611add4
      Filipe Manana authored
      Whenever we add a ticket to a space_info object we increment the object's
      reclaim_size counter with the ticket's bytes, and we decrement it with
      the corresponding amount only when we are able to grant the requested
      space to the ticket. When we are not able to grant the space to a ticket,
      or when the ticket is removed due to a signal (e.g. an application has
      received sigterm from the terminal) we never decrement the counter with
      the corresponding bytes from the ticket. This leak can result in the
      space reclaim code later doing much more work than necessary. So fix it
      by decrementing the counter in those two cases as well.
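
      A minimal sketch of the accounting invariant this fix restores (the
      struct and helper names below are simplified stand-ins, not the kernel
      code): whatever is added to reclaim_size when a ticket is queued must be
      subtracted on every path that removes the ticket, not only when the
      requested space is granted.

        /* Simplified stand-ins for space_info and reservation tickets. */
        struct space_info_sketch {
                unsigned long long reclaim_size;
        };

        struct ticket_sketch {
                unsigned long long bytes;
        };

        static void add_ticket(struct space_info_sketch *si, struct ticket_sketch *t)
        {
                si->reclaim_size += t->bytes;   /* counted when the ticket is queued */
        }

        static void remove_ticket(struct space_info_sketch *si, struct ticket_sketch *t)
        {
                /*
                 * Must run whether the space was granted, the reservation
                 * failed, or the waiter was interrupted by a signal; otherwise
                 * reclaim_size leaks and reclaim later does needless work.
                 */
                si->reclaim_size -= t->bytes;
        }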
      
      Fixes: db161806 ("btrfs: account ticket size at add/delete time")
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      d611add4
  2. 23 Mar, 2020 4 commits
  3. 20 Mar, 2020 1 commit
  4. 31 Jan, 2020 2 commits
    • btrfs: take overcommit into account in inc_block_group_ro · a30a3d20
      Josef Bacik authored
      
      
      inc_block_group_ro does a calculation to see if we would have enough room
      left over after marking this block group as read only, in order to decide
      whether it is OK to mark it read only.
      
      The problem is this calculation _only_ works for data, where our used is
      always less than our total.  For metadata we will overcommit, so this
      will almost always fail for metadata.
      
      Fix this by exporting btrfs_can_overcommit, and then checking whether we
      have enough space to remove the remaining free space in the block group
      we are trying to mark read only.  If we do, then we can mark this block
      group as read only.
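
      A hedged sketch of the decision being changed (simplified names and
      arithmetic, not the exact kernel helpers): data can keep the strict
      "used plus this group's free space must fit under the total" check,
      while metadata defers to the overcommit logic.

        /* Stand-in for the exported btrfs_can_overcommit() check. */
        static int overcommit_allowed_sketch(unsigned long long extra_bytes)
        {
                (void)extra_bytes;
                return 1;
        }

        static int can_mark_read_only_sketch(unsigned long long sinfo_used,
                                             unsigned long long sinfo_total,
                                             unsigned long long bg_free,
                                             int is_metadata)
        {
                /* Strict check: fine for data, where used never exceeds total. */
                if (sinfo_used + bg_free <= sinfo_total)
                        return 1;

                /*
                 * Metadata overcommits, so the strict check almost always
                 * fails there; instead ask whether reserving bg_free more
                 * bytes would still fit within the overcommit limits.
                 */
                return is_metadata && overcommit_allowed_sketch(bg_free);
        }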
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a30a3d20
    • btrfs: fix force usage in inc_block_group_ro · a7a63acc
      Josef Bacik authored
      
      
      For some reason we've been translating the do_chunk_alloc flag that is
      passed to btrfs_inc_block_group_ro into force in inc_block_group_ro, but
      these are two different things.
      
      force for inc_block_group_ro is used when we are forcing the block group
      read only no matter what, for example when the underlying chunk is
      marked read only.  We need to not do the space check here as this block
      group needs to be read only.
      
      btrfs_inc_block_group_ro() has a do_chunk_alloc flag that indicates that
      we need to pre-allocate a chunk before marking the block group read
      only.  This has nothing to do with forcing, and in fact we _always_ want
      to do the space check in this case, so unconditionally pass false for
      force in this case.
      
      Then fix up inc_block_group_ro to honor force as it's expected and
      documented to do.
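
      A hedged sketch of the corrected semantics (simplified signatures, not
      the kernel functions): force only bypasses the space check inside
      inc_block_group_ro, while do_chunk_alloc only controls chunk
      pre-allocation in the exported wrapper, which now always passes false
      for force.

        static int inc_block_group_ro_sketch(void *bg, int force)
        {
                (void)bg;
                if (!force) {
                        /* do the space check and bail out if it fails */
                }
                /* ... mark the block group read only ... */
                return 0;
        }

        static int btrfs_inc_block_group_ro_sketch(void *bg, int do_chunk_alloc)
        {
                if (do_chunk_alloc) {
                        /* pre-allocate a chunk before flipping to read only */
                }
                /* never forced from here: the space check always applies */
                return inc_block_group_ro_sketch(bg, 0 /* force */);
        }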
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a7a63acc
  5. 23 Jan, 2020 2 commits
  6. 20 Jan, 2020 5 commits
    • btrfs: remove unnecessary wrapper get_alloc_profile · ef0a82da
      Johannes Thumshirn authored
      
      
      btrfs_get_alloc_profile() is a simple wrapper over get_alloc_profile().
      The only difference is btrfs_get_alloc_profile() is visible to other
      functions in btrfs while get_alloc_profile() is static and thus only
      visible to functions in block-group.c.
      
      Let's just fold get_alloc_profile() into btrfs_get_alloc_profile() to
      get rid of the unnecessary second function.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Johannes Thumshirn <jth@kernel.org>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      ef0a82da
    • btrfs: handle empty block_group removal for async discard · 6e80d4f8
      Dennis Zhou authored
      
      
      block_group removal is a little tricky. It can race with the extent
      allocator, the cleaner thread, and balancing. The current path is for a
      block_group to be added to the unused_bgs list. Then, when the cleaner
      thread comes around, it starts a transaction and then proceeds with
      removing the block_group. Extents that are pinned are subsequently
      removed from the pinned trees and then eventually a discard is issued
      for the entire block_group.
      
      Async discard introduces another player into the game, the discard
      workqueue. While it has none of the racing issues, the new problem is
      ensuring we don't leave free space untrimmed prior to forgetting the
      block_group.  This is handled by placing fully free block_groups on a
      separate discard queue. This is necessary to maintain discarding order
      as in the future we will slowly trim even fully free block_groups. The
      ordering helps us make progress on the same block_group rather than, say,
      the most recently freed block_group, or needing to search through the
      fully freed block groups at the beginning of a list to find where to
      insert.
      
      The new order of events is that a fully freed block group gets placed on
      the unused discard queue first. Once it's processed, it will be placed on
      the unused_bgs list and then the original sequence of events will
      happen, just without the final whole block_group discard.
      
      The mount flags can change when processing unused_bgs, so when flipping
      from DISCARD to DISCARD_ASYNC, the unused_bgs must be punted to the
      discard_list to be trimmed. If we flip off DISCARD_ASYNC, we punt
      free block groups on the discard_list to the unused_bg queue which will
      do the final discard for us.
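
      A small sketch of the requeueing rule described above (the enum and
      function names are made up for illustration): entries move between the
      discard list and the unused_bgs list when the discard mount flags flip,
      so nothing is left untrimmed and nothing waits on a queue that is no
      longer serviced.

        enum bg_list_sketch { LIST_UNUSED_BGS, LIST_DISCARD };

        static enum bg_list_sketch requeue_on_flag_change(int discard_async_enabled,
                                                          enum bg_list_sketch current_list)
        {
                if (discard_async_enabled && current_list == LIST_UNUSED_BGS)
                        return LIST_DISCARD;    /* DISCARD -> DISCARD_ASYNC: trim first */
                if (!discard_async_enabled && current_list == LIST_DISCARD)
                        return LIST_UNUSED_BGS; /* async turned off: final discard there */
                return current_list;
        }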
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Dennis Zhou <dennis@kernel.org>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      6e80d4f8
    • btrfs: add the beginning of async discard, discard workqueue · b0643e59
      Dennis Zhou authored
      
      
      When discard is enabled, every time a pinned extent is released back to
      the block_group's free space cache, a discard is issued for the extent.
      This is an overeager approach when it comes to discarding and helping
      the SSD maintain enough free space to prevent severe garbage collection
      situations.
      
      This adds the beginning of async discard. Instead of issuing a discard
      before an extent is returned to the free space, the extent is just marked
      as untrimmed. The block_group is then added to an LRU which feeds into a
      workqueue that issues discards at a much slower rate. Full discarding of
      unused block groups is still done and will be addressed in a future patch
      of the series.
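
      A hedged sketch of the new flow (struct and helper names are
      illustrative only, not the kernel code): freeing just marks the space
      untrimmed and queues the block_group, and a background worker issues
      the discards later at a throttled rate.

        struct bg_discard_sketch {
                int has_untrimmed_space;
                int queued_for_discard;
        };

        static void release_pinned_extent_sketch(struct bg_discard_sketch *bg)
        {
                bg->has_untrimmed_space = 1;            /* defer the discard */
                if (!bg->queued_for_discard)
                        bg->queued_for_discard = 1;     /* add to the discard LRU feeding the workqueue */
        }

        static void discard_work_sketch(struct bg_discard_sketch *bg)
        {
                /* the workqueue trims at a much slower, rate-limited pace */
                bg->has_untrimmed_space = 0;
                bg->queued_for_discard = 0;
        }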
      
      For now, we don't persist the discard state of extents and bitmaps.
      Therefore, our failure recovery mode will be to consider extents
      untrimmed. This lets us handle failure and unmounting as one and the
      same.
      
      On a number of Facebook webservers, I collected data every minute
      accounting the time we spent in btrfs_finish_extent_commit() (col. 1)
      and in btrfs_commit_transaction() (col. 2). btrfs_finish_extent_commit()
      is where we discard extents synchronously before returning them to the
      free space cache.
      
      discard=sync:
                       p99 total per minute       p99 total per minute
            Drive   |   extent_commit() (ms)  |    commit_trans() (ms)
          ---------------------------------------------------------------
           Drive A  |           434           |          1170
           Drive B  |           880           |          2330
           Drive C  |          2943           |          3920
           Drive D  |          4763           |          5701
      
      discard=async:
                       p99 total per minute       p99 total per minute
            Drive   |   extent_commit() (ms)  |    commit_trans() (ms)
          --------------------------------------------------------------
           Drive A  |           134           |           956
           Drive B  |            64           |          1972
           Drive C  |            59           |          1032
           Drive D  |            62           |          1200
      
      While it's not great that the stats are cumulative over 1m, all of these
      servers are running the same workload and the delta between the two is
      substantial. We are spending significantly less time in
      btrfs_finish_extent_commit() which is responsible for discarding.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Dennis Zhou <dennis@kernel.org>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      b0643e59
    • btrfs: rename DISCARD mount option to DISCARD_SYNC · 46b27f50
      Dennis Zhou authored
      
      
      This series introduces async discard which will use the flag
      DISCARD_ASYNC, so rename the original flag to DISCARD_SYNC as it is
      synchronously done in transaction commit.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: Dennis Zhou <dennis@kernel.org>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      46b27f50
    • btrfs: kill min_allocable_bytes in inc_block_group_ro · f8935566
      Josef Bacik authored
      
      
      This is a relic from a time before we had a proper reservation mechanism
      and you could end up with really full chunks at chunk allocation time.
      This doesn't make sense anymore, so just kill it.
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      f8935566
  7. 18 Nov, 2019 13 commits
    • btrfs: scrub: Don't check free space before marking a block group RO · b12de528
      Qu Wenruo authored
      
      
      [BUG]
      When running btrfs/072 with only one online CPU, it has a pretty high
      chance to fail:
      
        btrfs/072 12s ... _check_dmesg: something found in dmesg (see xfstests-dev/results//btrfs/072.dmesg)
        - output mismatch (see xfstests-dev/results//btrfs/072.out.bad)
            --- tests/btrfs/072.out     2019-10-22 15:18:14.008965340 +0800
            +++ /xfstests-dev/results//btrfs/072.out.bad      2019-11-14 15:56:45.877152240 +0800
            @@ -1,2 +1,3 @@
             QA output created by 072
             Silence is golden
            +Scrub find errors in "-m dup -d single" test
            ...
      
      And with the following call trace:
      
        BTRFS info (device dm-5): scrub: started on devid 1
        ------------[ cut here ]------------
        BTRFS: Transaction aborted (error -27)
        WARNING: CPU: 0 PID: 55087 at fs/btrfs/block-group.c:1890 btrfs_create_pending_block_groups+0x3e6/0x470 [btrfs]
        CPU: 0 PID: 55087 Comm: btrfs Tainted: G        W  O      5.4.0-rc1-custom+ #13
        Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
        RIP: 0010:btrfs_create_pending_block_groups+0x3e6/0x470 [btrfs]
        Call Trace:
         __btrfs_end_transaction+0xdb/0x310 [btrfs]
         btrfs_end_transaction+0x10/0x20 [btrfs]
         btrfs_inc_block_group_ro+0x1c9/0x210 [btrfs]
         scrub_enumerate_chunks+0x264/0x940 [btrfs]
         btrfs_scrub_dev+0x45c/0x8f0 [btrfs]
         btrfs_ioctl+0x31a1/0x3fb0 [btrfs]
         do_vfs_ioctl+0x636/0xaa0
         ksys_ioctl+0x67/0x90
         __x64_sys_ioctl+0x43/0x50
         do_syscall_64+0x79/0xe0
         entry_SYSCALL_64_after_hwframe+0x49/0xbe
        ---[ end trace 166c865cec7688e7 ]---
      
      [CAUSE]
      The error number -27 is -EFBIG, returned from the following call chain:
      btrfs_end_transaction()
      |- __btrfs_end_transaction()
         |- btrfs_create_pending_block_groups()
            |- btrfs_finish_chunk_alloc()
               |- btrfs_add_system_chunk()
      
      This happens because we have used up all space of
      btrfs_super_block::sys_chunk_array.
      
      The root cause is that we have the following bad loop of creating tons of
      system chunks:
      
      1. The only SYSTEM chunk is being scrubbed
         It's very common to have only one SYSTEM chunk.
       2. New SYSTEM bg will be allocated
          btrfs_inc_block_group_ro() checks if we have enough space left after
          marking the current bg RO. If not, it allocates a new chunk.
      3. New SYSTEM bg is still empty, will be reclaimed
         During the reclaim, we will mark it RO again.
       4. That newly allocated empty SYSTEM bg gets scrubbed
          We go back to step 2, as the bg is already marked RO but still not
          cleaned up yet.
      
      If the cleaner kthread doesn't get executed fast enough (e.g. only one
      CPU), then we will get more and more empty SYSTEM chunks, using up all
      the space of btrfs_super_block::sys_chunk_array.
      
      [FIX]
      Since scrub/dev-replace doesn't always need to allocate new extents,
      especially chunk tree extents, we don't really need to do chunk
      pre-allocation.
      
      To break the above spiral, here we introduce a new parameter to
      btrfs_inc_block_group_ro(), @do_chunk_alloc, which indicates whether we
      need extra chunk pre-allocation.
      
      For relocation, we pass @do_chunk_alloc=true, while for scrub, we pass
      @do_chunk_alloc=false.
      This should keep unnecessary empty chunks from popping up for scrub.
      
      Also, since there are now two parameters for btrfs_inc_block_group_ro(),
      add more comments for it.
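
      A hedged sketch of how the new flag is meant to be used by callers (the
      function below is a simplified stand-in, not copied from the kernel
      tree): relocation keeps pre-allocating a chunk, scrub does not, which
      breaks the spiral of empty SYSTEM chunks described above.

        static int inc_bg_ro_sketch(void *bg, int do_chunk_alloc)
        {
                (void)bg;
                if (do_chunk_alloc) {
                        /* make sure a fresh chunk exists before going read only */
                }
                /* ... mark the block group read only ... */
                return 0;
        }

        static void relocation_path_sketch(void *bg)
        {
                /* relocation needs somewhere to write, so pre-allocate */
                inc_bg_ro_sketch(bg, 1 /* do_chunk_alloc */);
        }

        static void scrub_path_sketch(void *bg)
        {
                /* scrub only reads; no new chunk, no SYSTEM chunk spiral */
                inc_bg_ro_sketch(bg, 0 /* do_chunk_alloc */);
        }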
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      b12de528
    • btrfs: rename btrfs_block_group_cache · 32da5386
      David Sterba authored
      
      
      The type name is misleading: a single entry is named 'cache' while this
      normally means a collection of objects. Rename that everywhere. Also the
      identifier was quite long, making function prototypes harder to format.
      Suggested-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      32da5386
    • btrfs: block-group: Reuse the item key from caller of read_one_block_group() · d49a2ddb
      Qu Wenruo authored
      
      
      For read_one_block_group(), its only caller has already got the item key
      to search for the next block group item.

      So we can use that key directly without doing our own conversion on the
      stack.

      Also, since the key used in btrfs_read_block_groups() is vital for
      block group item search, add the 'const' keyword to that parameter to
      prevent read_one_block_group() from modifying it.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      d49a2ddb
    • btrfs: block-group: Refactor btrfs_read_block_groups() · ffb9e0f0
      Qu Wenruo authored
      
      
      Refactor the work inside the loop of btrfs_read_block_groups() into one
      separate function, read_one_block_group().
      
      This allows read_one_block_group to be reused for the later BG_TREE feature.
      
      The refactor does the following extra fix:
      - Use btrfs_fs_incompat() to replace open-coded feature check
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      ffb9e0f0
    • btrfs: drop incompat bit for raid1c34 after last block group is gone · 9c907446
      David Sterba authored
      
      
      When there are no raid1c3 or raid1c4 block groups left after balance
      (either convert or with other filters applied), remove the incompat bit.
      This is already done for RAID56, do the same for RAID1C34.
      Signed-off-by: David Sterba <dsterba@suse.com>
      9c907446
    • btrfs: add dedicated members for start and length of a block group · b3470b5d
      David Sterba authored
      
      
      The on-disk format of the block group item makes use of the key that
      stores the offset and length. This is further used in the code, although
      it makes things harder to understand. The key is also packed, so the
      offset/length is not properly aligned as u64.
      
      Add start (key.objectid) and length (key.offset) members to block group
      and remove the embedded key.  When the item is searched or written, a
      local variable for key is used.
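
      A hedged sketch of the new layout (simplified types, not the kernel
      definitions): the block group carries properly aligned start and length
      members, and a key is only built in a local variable when the item is
      searched or written.

        #include <stdint.h>

        struct block_group_sketch {
                uint64_t start;         /* was key.objectid */
                uint64_t length;        /* was key.offset */
        };

        struct key_sketch {
                uint64_t objectid;
                uint8_t type;
                uint64_t offset;
        };

        static struct key_sketch block_group_key_sketch(const struct block_group_sketch *bg)
        {
                struct key_sketch key;

                key.objectid = bg->start;       /* logical start */
                key.type = 0;                   /* placeholder for the block group item key type */
                key.offset = bg->length;        /* length of the block group */
                return key;
        }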
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      b3470b5d
    • btrfs: rename block_group_item on-stack accessors to follow naming · de0dc456
      David Sterba authored
      
      
      All accessors defined by BTRFS_SETGET_STACK_FUNCS contain _stack_ in the
      name, the block group ones were not following that scheme, so let's
      switch them.
      Signed-off-by: David Sterba <dsterba@suse.com>
      de0dc456
    • btrfs: remove embedded block_group_cache::item · 3d976388
      David Sterba authored
      
      
      The members ::used and ::flags are now in the block group cache
      structure; the last remaining one is chunk_objectid, but that's set to a
      fixed value and otherwise unused. The item is constructed from a local
      variable before write, so we can remove the embedded one from the block
      group.
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      3d976388
    • btrfs: move block_group_item::flags to block group · f93c63e5
      David Sterba authored
      
      
      The flags are read from the item that's embedded in the block group
      struct, but the item will be removed. Use the new ::flags member after
      read and before write.
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      f93c63e5
    • btrfs: move block_group_item::used to block group · bf38be65
      David Sterba authored
      
      
      For unknown reasons, the member 'used' in the block group struct is
      stored in the b-tree item and accessed everywhere using the special
      accessor helper. Let's unify it and make it a regular member and only
      update the item before writing it to the tree.
      
      The item is still being used for flags and chunk_objectid; there's some
      duplication until the item is removed in following patches.
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      bf38be65
    • btrfs: use btrfs_block_group_cache_done in update_block_group · a60adce8
      Josef Bacik authored
      
      
      When freeing extents in a block group we check to see if the block
      group is not cached, and then cache it if we need to.  However we'll
      just carry on as long as we're loading the cache.  This is problematic
      because we are dirtying the block group here.  If we are fast enough we
      could do a transaction commit and clear the free space cache while we're
      still loading the space cache in another thread.  This truncates the
      free space inode, which will keep it from loading the space cache.
      
      Fix this by using the btrfs_block_group_cache_done helper so that we try
      to load the space cache unconditionally here, which will result in the
      caller waiting for the fast caching to complete and keep us from
      truncating the free space inode.
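
      A hedged sketch of the behavioural change (the enum and names below are
      illustrative, not the kernel's): the old code only kicked off caching
      when nothing had been started yet, while the fix treats anything short
      of "caching finished" as a reason to go through the cache-loading path
      and wait.

        enum caching_state_sketch { CACHING_NOT_STARTED, CACHING_IN_PROGRESS, CACHING_FINISHED };

        static int needs_cache_load_sketch(enum caching_state_sketch state)
        {
                /*
                 * Old check (roughly): only when caching had not started at
                 * all, so an in-progress load was skipped and the block group
                 * could be dirtied while the space cache was still loading.
                 */
                /* return state == CACHING_NOT_STARTED; */

                /* New check: wait unless caching has fully completed. */
                return state != CACHING_FINISHED;
        }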
      
      CC: stable@vger.kernel.org # 4.4+
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a60adce8
    • btrfs: block-group: Rework documentation of check_system_chunk function · a9143bd3
      Marcos Paulo de Souza authored
      Commit 4617ea3a ("Btrfs: fix necessary chunk tree space calculation
      when allocating a chunk") removed the is_allocation argument from
      check_system_chunk, since the formula for reserving the necessary space
      for allocating or removing a chunk would be the same.
      
      So, rework the comment by removing the mention of the is_allocation
      argument.
      Signed-off-by: Marcos Paulo de Souza <marcos.souza.org@gmail.com>
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a9143bd3
    • btrfs: get rid of unique workqueue helper functions · a0cac0ec
      Omar Sandoval authored
      Commit 9e0af237 ("Btrfs: fix task hang under heavy compressed write")
      worked around the issue that a recycled work item could get a
      false dependency on the original work item due to how the workqueue code
      guarantees non-reentrancy. It did so by giving different work functions
      to different types of work.
      
      However, the fixes in the previous few patches are more complete, as
      they prevent a work item from being recycled at all (except for a tiny
      window that the kernel workqueue code handles for us). This obsoletes
      the previous fix, so we don't need the unique helpers for correctness.
      The only other reason to keep them would be so they show up in stack
      traces, but they always seem to be optimized to a tail call, so they
      don't show up anyway. So, let's just get rid of the extra indirection.
      
      While we're here, rename normal_work_helper() to the more informative
      btrfs_work_helper().
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a0cac0ec
  8. 11 Oct, 2019 1 commit
  9. 09 Sep, 2019 11 commits