1. 30 Mar, 2020 1 commit
  2. 16 Jan, 2020 1 commit
  3. 17 Nov, 2019 1 commit
    • ubi: Fix producing anchor PEBs · f9c34bb5
      Sascha Hauer authored
      When a new fastmap is about to be written UBI must make sure it has a
      free block for a fastmap anchor available. For this ubi_update_fastmap()
      calls ubi_ensure_anchor_pebs(). This stopped working with 2e8f08de
      ("ubi: Fix races around ubi_refill_pools()"): since that commit the
      wear-leveling code is blocked and can no longer produce free PEBs. UBI
      then more often than not falls back to writing the new fastmap anchor
      to the same block it was already on, which means the same erase block
      gets erased during each fastmap write and wears out quite fast.
      
      As the locking prevents us from producing the anchor PEB when we
      actually need it, this patch changes the strategy for creating the
      anchor PEB. We no longer create it on demand right before we want to
      write a fastmap, but instead we create an anchor PEB right after we have
      written a fastmap. This gives us enough time to produce a new anchor PEB
      before it is needed. To make sure we have an anchor PEB for the very
      first fastmap write we call ubi_ensure_anchor_pebs() during
      initialisation as well.
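      The revised ordering can be sketched in plain C. This is a hedged
      userspace model only: struct fastmap_state, produce_anchor_peb() and
      write_fastmap() are hypothetical stand-ins for the kernel code, not
      the real UBI API.

      ```c
      /* Sketch of the strategy above: consume the anchor PEB during the
       * fastmap write, then immediately reserve the next one, so it is
       * ready long before the following write needs it. */
      #include <assert.h>
      #include <stdbool.h>

      struct fastmap_state {
              bool have_anchor;   /* a free anchor PEB is reserved */
      };

      /* Wear-leveling produces a fresh anchor PEB while nothing is locked. */
      static void produce_anchor_peb(struct fastmap_state *s)
      {
              s->have_anchor = true;
      }

      static int write_fastmap(struct fastmap_state *s)
      {
              if (!s->have_anchor)
                      return -1;          /* would reuse and wear out the old block */
              s->have_anchor = false;     /* anchor consumed by this write */
              produce_anchor_peb(s);      /* prepare for the next write */
              return 0;
      }

      int main(void)
      {
              struct fastmap_state s = { .have_anchor = false };

              produce_anchor_peb(&s);         /* done at init time, as above */
              assert(write_fastmap(&s) == 0);
              assert(write_fastmap(&s) == 0); /* anchor was re-produced in time */
              return 0;
      }
      ```

      The point of the model: write_fastmap() never has to produce a PEB
      under the lock, because the previous call already left one behind.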
      
      Fixes: 2e8f08de ("ubi: Fix races around ubi_refill_pools()")
      Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
      Signed-off-by: Richard Weinberger <richard@nod.at>
  4. 15 Sep, 2019 1 commit
    • ubi: Don't do anchor move within fastmap area · 8596813a
      Richard Weinberger authored
      To make sure that Fastmap can use a PEB within the first 64
      PEBs, UBI moves blocks away from that area.
      It uses regular wear-leveling for that job.
      
      An anchor move can be triggered if no PEB is free in this area
      or because of anticipation. In the latter case it can happen
      that UBI decides to move a block but finds a free PEB
      within the same area.
      Such a move is in vain and only increases erase counters.
      
      Catch this case and cancel wear-leveling if this happens.
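      The cancellation condition can be sketched as follows. A minimal
      userspace sketch: in_fm_area() and should_cancel_anchor_move() are
      invented names, though UBI_FM_MAX_START (64) is the real constant
      bounding the fastmap area.

      ```c
      /* Cancel an anticipatory anchor move when source and chosen target
       * both sit inside the fastmap area: moving gains nothing and only
       * bumps the erase counters of both blocks. */
      #include <assert.h>
      #include <stdbool.h>

      #define UBI_FM_MAX_START 64   /* fastmap must sit in the first 64 PEBs */

      static bool in_fm_area(int pnum)
      {
              return pnum < UBI_FM_MAX_START;
      }

      static bool should_cancel_anchor_move(int from, int to)
      {
              return in_fm_area(from) && in_fm_area(to);
      }

      int main(void)
      {
              assert(should_cancel_anchor_move(10, 20));   /* both inside: in vain */
              assert(!should_cancel_anchor_move(10, 100)); /* genuine move away */
              return 0;
      }
      ```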
      Signed-off-by: Richard Weinberger <richard@nod.at>
  5. 30 May, 2019 1 commit
  6. 07 May, 2019 1 commit
  7. 05 Mar, 2019 1 commit
  8. 24 Feb, 2019 2 commits
  9. 12 Jun, 2018 1 commit
    • treewide: kzalloc() -> kcalloc() · 6396bb22
      Kees Cook authored
      The kzalloc() function has a 2-factor argument form, kcalloc(). This
      patch replaces cases of:
      
              kzalloc(a * b, gfp)
      
      with:
              kcalloc(a, b, gfp)
      
      as well as handling cases of:
      
              kzalloc(a * b * c, gfp)
      
      with:
      
              kzalloc(array3_size(a, b, c), gfp)
      
      as it's slightly less ugly than:
      
              kzalloc_array(array_size(a, b), c, gfp)
      
      This does, however, attempt to ignore constant size factors like:
      
              kzalloc(4 * 1024, gfp)
      
      though any constants defined via macros get caught up in the conversion.
      
      Any factors with a sizeof() of "unsigned char", "char", and "u8" were
      dropped, since they're redundant.
      
      The Coccinelle script used for this was:
      
      // Fix redundant parens around sizeof().
      @@
      type TYPE;
      expression THING, E;
      @@
      
      (
        kzalloc(
      -	(sizeof(TYPE)) * E
      +	sizeof(TYPE) * E
        , ...)
      |
        kzalloc(
      -	(sizeof(THING)) * E
      +	sizeof(THING) * E
        , ...)
      )
      
      // Drop single-byte sizes and redundant parens.
      @@
      expression COUNT;
      typedef u8;
      typedef __u8;
      @@
      
      (
        kzalloc(
      -	sizeof(u8) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(__u8) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(char) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(unsigned char) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(u8) * COUNT
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(__u8) * COUNT
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(char) * COUNT
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(unsigned char) * COUNT
      +	COUNT
        , ...)
      )
      
      // 2-factor product with sizeof(type/expression) and identifier or constant.
      @@
      type TYPE;
      expression THING;
      identifier COUNT_ID;
      constant COUNT_CONST;
      @@
      
      (
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * (COUNT_ID)
      +	COUNT_ID, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * COUNT_ID
      +	COUNT_ID, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * (COUNT_CONST)
      +	COUNT_CONST, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * COUNT_CONST
      +	COUNT_CONST, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * (COUNT_ID)
      +	COUNT_ID, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * COUNT_ID
      +	COUNT_ID, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * (COUNT_CONST)
      +	COUNT_CONST, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * COUNT_CONST
      +	COUNT_CONST, sizeof(THING)
        , ...)
      )
      
      // 2-factor product, only identifiers.
      @@
      identifier SIZE, COUNT;
      @@
      
      - kzalloc
      + kcalloc
        (
      -	SIZE * COUNT
      +	COUNT, SIZE
        , ...)
      
      // 3-factor product with 1 sizeof(type) or sizeof(expression), with
      // redundant parens removed.
      @@
      expression THING;
      identifier STRIDE, COUNT;
      type TYPE;
      @@
      
      (
        kzalloc(
      -	sizeof(TYPE) * (COUNT) * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE) * (COUNT) * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE) * COUNT * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE) * COUNT * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * (COUNT) * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * (COUNT) * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * COUNT * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * COUNT * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      )
      
      // 3-factor product with 2 sizeof(variable), with redundant parens removed.
      @@
      expression THING1, THING2;
      identifier COUNT;
      type TYPE1, TYPE2;
      @@
      
      (
        kzalloc(
      -	sizeof(TYPE1) * sizeof(TYPE2) * COUNT
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
        , ...)
      |
        kzalloc(
      -	sizeof(THING1) * sizeof(THING2) * COUNT
      +	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
        , ...)
      |
        kzalloc(
      -	sizeof(THING1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * COUNT
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
        , ...)
      )
      
      // 3-factor product, only identifiers, with redundant parens removed.
      @@
      identifier STRIDE, SIZE, COUNT;
      @@
      
      (
        kzalloc(
      -	(COUNT) * STRIDE * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * (STRIDE) * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * STRIDE * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	(COUNT) * (STRIDE) * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * (STRIDE) * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	(COUNT) * STRIDE * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	(COUNT) * (STRIDE) * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * STRIDE * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      )
      
      // Any remaining multi-factor products, first at least 3-factor products,
      // when they're not all constants...
      @@
      expression E1, E2, E3;
      constant C1, C2, C3;
      @@
      
      (
        kzalloc(C1 * C2 * C3, ...)
      |
        kzalloc(
      -	(E1) * E2 * E3
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kzalloc(
      -	(E1) * (E2) * E3
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kzalloc(
      -	(E1) * (E2) * (E3)
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kzalloc(
      -	E1 * E2 * E3
      +	array3_size(E1, E2, E3)
        , ...)
      )
      
      // And then all remaining 2 factors products when they're not all constants,
      // keeping sizeof() as the second factor argument.
      @@
      expression THING, E1, E2;
      type TYPE;
      constant C1, C2, C3;
      @@
      
      (
        kzalloc(sizeof(THING) * C2, ...)
      |
        kzalloc(sizeof(TYPE) * C2, ...)
      |
        kzalloc(C1 * C2 * C3, ...)
      |
        kzalloc(C1 * C2, ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * (E2)
      +	E2, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * E2
      +	E2, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * (E2)
      +	E2, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * E2
      +	E2, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	(E1) * E2
      +	E1, E2
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	(E1) * (E2)
      +	E1, E2
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	E1 * E2
      +	E1, E2
        , ...)
      )
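      The motivation for the conversion can be shown with a userspace
      analogue: calloc() plays the role of kcalloc() here. The open-coded
      size product can overflow size_t and yield an undersized buffer,
      while the 2-factor form checks the product and fails cleanly.

      ```c
      /* Why kzalloc(a * b) -> kcalloc(a, b) matters: the multiplication
       * can wrap, but calloc() (like kcalloc) detects the overflow and
       * returns NULL instead of a too-small allocation. */
      #include <assert.h>
      #include <stdint.h>
      #include <stdlib.h>

      int main(void)
      {
              size_t n = SIZE_MAX / 2 + 1;   /* n * 2 wraps around to 0 */

              /* The open-coded product silently asks for 0 bytes. */
              assert(n * 2 == 0);

              /* calloc() sees the overflow and refuses the allocation. */
              assert(calloc(n, 2) == NULL);

              /* A well-formed request still succeeds. */
              void *p = calloc(16, 2);
              assert(p != NULL);
              free(p);
              return 0;
      }
      ```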
      Signed-off-by: Kees Cook <keescook@chromium.org>
  10. 07 Jun, 2018 1 commit
    • ubi: fastmap: Cancel work upon detach · 6e7d8016
      Richard Weinberger authored
      Ben Hutchings pointed out that 29b7a6fa ("ubi: fastmap: Don't flush
      fastmap work on detach") does not really fix the problem; it only
      reduces the risk of hitting the race window where fastmap work races
      against free()'ing ubi->volumes[].
      
      The correct approach is making sure that no more fastmap work is in
      progress before we free ubi data structures.
      So we cancel fastmap work right after the ubi background thread is
      stopped.
      By setting ubi->thread_enabled to zero we make sure that no further work
      tries to wake the thread.
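      The shutdown ordering can be modelled in a few lines of userspace C.
      This is only a sketch under stated assumptions: struct ubi_sim and
      its helpers are invented for illustration; the kernel uses
      kthread_stop() and cancel_work_sync() for the real thing.

      ```c
      /* Model of the detach ordering: stop the background thread (so no
       * further work can wake it), cancel pending fastmap work, and only
       * then free the ubi data structures. */
      #include <assert.h>
      #include <stdbool.h>

      struct ubi_sim {
              bool thread_enabled;   /* mirrors ubi->thread_enabled */
              bool work_pending;     /* queued fastmap work */
              bool freed;            /* volumes[] released */
      };

      static void stop_bg_thread(struct ubi_sim *u) { u->thread_enabled = false; }
      static void cancel_fm_work(struct ubi_sim *u) { u->work_pending = false; }

      static bool detach(struct ubi_sim *u)
      {
              stop_bg_thread(u);      /* no further wake-ups possible */
              cancel_fm_work(u);      /* wait out in-flight work */
              if (u->work_pending)
                      return false;   /* would race against free() */
              u->freed = true;        /* now safe to free structures */
              return true;
      }

      int main(void)
      {
              struct ubi_sim u = { .thread_enabled = true, .work_pending = true };

              assert(detach(&u));         /* work cancelled before free */
              assert(!u.thread_enabled);  /* nothing can wake the thread */
              assert(u.freed);
              return 0;
      }
      ```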
      
      Fixes: 29b7a6fa ("ubi: fastmap: Don't flush fastmap work on detach")
      Fixes: 74cdaf24 ("UBI: Fastmap: Fix memory leaks while closing the WL sub-system")
      Cc: stable@vger.kernel.org
      Cc: Ben Hutchings <ben.hutchings@codethink.co.uk>
      Cc: Martin Townsend <mtownsend1973@gmail.com>
      Signed-off-by: Richard Weinberger <richard@nod.at>
  11. 18 Jan, 2018 1 commit
  12. 17 Jan, 2018 2 commits
  13. 02 Oct, 2016 3 commits
    • ubi: Fix races around ubi_refill_pools() · 2e8f08de
      Richard Weinberger authored
      When writing a new Fastmap the first thing that happens
      is refilling the pools in memory.
      At this stage it is possible that PEBs from the new pools already get
      claimed and written with data.
      If this happens before the new Fastmap data structure hits the flash
      and we face a power cut, the freshly written PEB will not be scanned
      and goes unnoticed.
      
      Solve the issue by locking the pools until Fastmap is written.
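      The fix can be sketched single-threaded. A hedged model: struct pools
      and the helper names are hypothetical stand-ins for the real locking
      in the fastmap wear-leveling code.

      ```c
      /* Pools stay locked from refill until the new fastmap is on flash,
       * so no PEB from the fresh pools can be claimed in between. */
      #include <assert.h>
      #include <stdbool.h>

      struct pools {
              bool locked;           /* refilled but not yet on flash */
              int  free_pebs;
      };

      static void refill_pools(struct pools *p)
      {
              p->locked = true;      /* block claims until fastmap is written */
              p->free_pebs = 8;
      }

      static void write_fastmap(struct pools *p)
      {
              /* ... new fastmap data structure hits the flash here ... */
              p->locked = false;     /* claims may proceed again */
      }

      /* Returns -1 while the pools are locked: a power cut now could
       * otherwise leave a written PEB the on-flash fastmap knows nothing
       * about. */
      static int claim_peb(struct pools *p)
      {
              if (p->locked || p->free_pebs == 0)
                      return -1;
              return --p->free_pebs;
      }

      int main(void)
      {
              struct pools p = { false, 0 };

              refill_pools(&p);
              assert(claim_peb(&p) == -1);   /* locked: claim must wait */
              write_fastmap(&p);
              assert(claim_peb(&p) == 7);    /* fastmap on flash: claims resume */
              return 0;
      }
      ```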
      
      Cc: <stable@vger.kernel.org>
      Fixes: dbb7d2a8 ("UBI: Add fastmap core")
      Signed-off-by: Richard Weinberger <richard@nod.at>
    • ubi: Deal with interrupted erasures in WL · 23654188
      Richard Weinberger authored
      When Fastmap is used we can face an -EBADMSG here,
      since Fastmap cannot know about unmaps.
      If the erasure was interrupted the PEB may show ECC
      errors and UBI would go into ro-mode, as it assumes
      that the PEB was checked during attach time, which is
      not the case with Fastmap.
      
      Cc: <stable@vger.kernel.org>
      Fixes: dbb7d2a8 ("UBI: Add fastmap core")
      Signed-off-by: Richard Weinberger <richard@nod.at>
    • UBI: introduce the VID buffer concept · 3291b52f
      Boris Brezillon authored
      Currently, all VID headers are allocated and freed using the
      ubi_zalloc_vid_hdr() and ubi_free_vid_hdr() functions. These functions
      make sure to align the allocation on ubi->vid_hdr_alsize and adjust the
      vid_hdr pointer to match the ubi->vid_hdr_shift requirements.
      This works fine, but is a bit convoluted.
      Moreover, the future introduction of LEB consolidation (needed to support
      MLC/TLC NANDs) will allow a VID buffer to contain more than one VID
      header.
      
      Hence the creation of a ubi_vid_io_buf struct to attach extra information
      to the VID header.
      
      We currently only store the actual pointer of the underlying buffer, but
      will soon add the number of VID headers contained in the buffer.
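      The concept can be sketched as below. A hedged sketch following the
      commit's description: the field layout and the helper names
      vid_buf_alloc()/vid_buf_free() are illustrative, not verified against
      the kernel headers.

      ```c
      /* The struct ties the shift-adjusted VID header pointer to the raw
       * underlying allocation, which is what actually gets freed. */
      #include <assert.h>
      #include <stdint.h>
      #include <stdlib.h>

      struct ubi_vid_hdr { uint32_t magic; /* ... */ };

      struct ubi_vid_io_buf {
              struct ubi_vid_hdr *hdr;   /* adjusted pointer handed to users */
              void *buffer;              /* actual allocation, freed as a whole */
      };

      /* vid_hdr_shift stands in for ubi->vid_hdr_shift. */
      static struct ubi_vid_io_buf *vid_buf_alloc(size_t size, size_t vid_hdr_shift)
      {
              struct ubi_vid_io_buf *vb = malloc(sizeof(*vb));

              if (!vb)
                      return NULL;
              vb->buffer = calloc(1, size);
              if (!vb->buffer) {
                      free(vb);
                      return NULL;
              }
              vb->hdr = (struct ubi_vid_hdr *)((char *)vb->buffer + vid_hdr_shift);
              return vb;
      }

      static void vid_buf_free(struct ubi_vid_io_buf *vb)
      {
              if (!vb)
                      return;
              free(vb->buffer);   /* free the underlying buffer, not ->hdr */
              free(vb);
      }

      int main(void)
      {
              struct ubi_vid_io_buf *vb = vid_buf_alloc(256, 32);

              assert(vb && vb->buffer);
              assert((char *)vb->hdr == (char *)vb->buffer + 32);
              vid_buf_free(vb);
              return 0;
      }
      ```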
      Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
      Signed-off-by: Richard Weinberger <richard@nod.at>
  14. 29 Jul, 2016 1 commit
  15. 24 May, 2016 1 commit
  16. 10 Jan, 2016 1 commit
  17. 16 Dec, 2015 2 commits
  18. 29 Sep, 2015 1 commit
    • UBI: return ENOSPC if no enough space available · 7c7feb2e
      shengyong authored
      UBI: attaching mtd1 to ubi0
      UBI: scanning is finished
      UBI error: init_volumes: not enough PEBs, required 706, available 686
      UBI error: ubi_wl_init: no enough physical eraseblocks (-20, need 1)
      UBI error: ubi_attach_mtd_dev: failed to attach mtd1, error -12 <= NOT ENOMEM
      UBI error: ubi_init: cannot attach mtd1
      
      If available PEBs are not enough when initializing volumes, return -ENOSPC
      directly. If available PEBs are not enough when initializing WL, return
      -ENOSPC instead of -ENOMEM.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Sheng Yong <shengyong1@huawei.com>
      Signed-off-by: Richard Weinberger <richard@nod.at>
      Reviewed-by: David Gstir <david@sigma-star.at>
  19. 03 Jun, 2015 1 commit
  20. 26 Mar, 2015 16 commits