  1. May 07, 2021
  2. May 05, 2021
    • selftests/vm: gup_test: test faulting in kernel, and verify pinnable pages · e44605a8
      Pavel Tatashin authored
      When pages are pinned, they can be faulted in from userland and then
      migrated, or they can be faulted in directly in the kernel without
      migration.
      
      In either case, the pinned pages must end up being pinnable (not
      movable).
      
      Add a new test to gup_test, to help verify that the gup/pup
      (get_user_pages() / pin_user_pages()) behavior with respect to pinnable
      and movable pages is reasonable and correct.  Specifically, provide a
      way to:
      
      1) Verify that only "pinnable" pages are pinned.  This is checked
         automatically for you.
      
      2) Verify that gup/pup performance is reasonable.  This requires
         comparing benchmarks between doing gup/pup on pages that have been
         pre-faulted in from user space, vs.  doing gup/pup on pages that are
         not faulted in until gup/pup time (via FOLL_TOUCH).  This decision is
         controlled with the new -z command line option.
      
      Link: https://lkml.kernel.org/r/20210215161349.246722-15-pasha.tatashin@soleen.com
      
      
      Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
      Reviewed-by: John Hubbard <jhubbard@nvidia.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ira Weiny <ira.weiny@intel.com>
      Cc: James Morris <jmorris@namei.org>
      Cc: Jason Gunthorpe <jgg@nvidia.com>
      Cc: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sasha Levin <sashal@kernel.org>
      Cc: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Cc: Tyler Hicks <tyhicks@linux.microsoft.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e44605a8
    • selftests/vm: gup_test: fix test flag · 79dbf135
      Pavel Tatashin authored
      In gup_test both gup_flags and test_flags use the same flags field.
      This is broken.
      
      Further, in the actual gup_test.c, all the passed gup_flags are erased
      and unconditionally replaced with FOLL_WRITE.
      
      Which means that test_flags are ignored, and code like this always
      performs pin dump test:
      
      155  			if (gup->flags & GUP_TEST_FLAG_DUMP_PAGES_USE_PIN)
      156  				nr = pin_user_pages(addr, nr, gup->flags,
      157  						    pages + i, NULL);
      158  			else
      159  				nr = get_user_pages(addr, nr, gup->flags,
      160  						    pages + i, NULL);
      161  			break;
      
      Add a new test_flags field, to allow raw gup_flags to work.  Add a new
      subcommand for DUMP_USER_PAGES_TEST to specify that pin test should be
      performed.
      
      Remove unconditional overwriting of gup_flags via FOLL_WRITE.  But,
      preserve the previous behaviour where FOLL_WRITE was the default flag,
      and add a new option "-W" to unset FOLL_WRITE.
      
      Rename the flags field to gup_flags.
      
      With the fix, dump works like this:
      
        root@virtme:/# gup_test  -c
        ---- page #0, starting from user virt addr: 0x7f8acb9e4000
        page:00000000d3d2ee27 refcount:2 mapcount:1 mapping:0000000000000000
        index:0x0 pfn:0x100bcf
        anon flags: 0x300000000080016(referenced|uptodate|lru|swapbacked)
        raw: 0300000000080016 ffffd0e204021608 ffffd0e208df2e88 ffff8ea04243ec61
        raw: 0000000000000000 0000000000000000 0000000200000000 0000000000000000
        page dumped because: gup_test: dump_pages() test
        DUMP_USER_PAGES_TEST: done
      
        root@virtme:/# gup_test  -c -p
        ---- page #0, starting from user virt addr: 0x7fd19701b000
        page:00000000baed3c7d refcount:1025 mapcount:1 mapping:0000000000000000
        index:0x0 pfn:0x108008
        anon flags: 0x300000000080014(uptodate|lru|swapbacked)
        raw: 0300000000080014 ffffd0e204200188 ffffd0e205e09088 ffff8ea04243ee71
        raw: 0000000000000000 0000000000000000 0000040100000000 0000000000000000
        page dumped because: gup_test: dump_pages() test
        DUMP_USER_PAGES_TEST: done
      
      The refcount shows the difference between the pin and no-pin cases.
      Also change the type of nr from int to long, as it counts pages.
      
      Link: https://lkml.kernel.org/r/20210215161349.246722-14-pasha.tatashin@soleen.com
      
      
      Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
      Reviewed-by: John Hubbard <jhubbard@nvidia.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ira Weiny <ira.weiny@intel.com>
      Cc: James Morris <jmorris@namei.org>
      Cc: Jason Gunthorpe <jgg@nvidia.com>
      Cc: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sasha Levin <sashal@kernel.org>
      Cc: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Cc: Tyler Hicks <tyhicks@linux.microsoft.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      79dbf135
    • userfaultfd/selftests: add test exercising minor fault handling · f0fa9433
      Axel Rasmussen authored
      Fix a dormant bug in userfaultfd_events_test(), where we did `return
      faulting_process(0)` instead of `exit(faulting_process(0))`.  This
      caused the forked process to keep running, trying to execute any further
      test cases after the events test in parallel with the "real" process.
      
      Add a simple test case which exercises minor faults.  In short, it does
      the following:
      
      1. "Sets up" an area (area_dst) and a second shared mapping to the same
         underlying pages (area_dst_alias).
      
      2. Registers one of these areas with userfaultfd, in minor fault mode.
      
      3. Starts a second thread to handle any minor faults.
      
      4. Populates the underlying pages with the non-UFFD-registered side of
         the mapping. Basically, memset()s each page with some arbitrary
         contents.
      
      5. Then, using the UFFD-registered mapping, reads all of the page
         contents, asserting that the contents match expectations (we expect
         that the minor fault handling thread can modify the page contents
         before resolving the fault).
      
      The minor fault handling thread, upon receiving an event, flips all the
      bits (~) in that page, just to prove that it can modify it in some
      arbitrary way.  Then it issues a UFFDIO_CONTINUE ioctl, to set up the
      mapping and resolve the fault.  The reading thread should wake up and
      see this modification.
      
      Currently the minor fault test is only enabled in hugetlb_shared mode,
      as this is the only configuration the kernel feature supports.
      
      Link: https://lkml.kernel.org/r/20210301222728.176417-7-axelrasmussen@google.com
      
      
      Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Cc: Adam Ruprecht <ruprecht@google.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Cannon Matthews <cannonmatthews@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chinwen Chang <chinwen.chang@mediatek.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: "Dr . David Alan Gilbert" <dgilbert@redhat.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Lokesh Gidra <lokeshgidra@google.com>
      Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: "Michal Koutn" <mkoutny@suse.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Mina Almasry <almasrymina@google.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Oliver Upton <oupton@google.com>
      Cc: Shaohua Li <shli@fb.com>
      Cc: Shawn Anastasio <shawn@anastas.io>
      Cc: Steven Price <steven.price@arm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f0fa9433
    • mm: huge_memory: debugfs for file-backed THP split · fbe37501
      Zi Yan authored
      Further extend <debugfs>/split_huge_pages to accept
      "<path>,<pgoff_start>,<pgoff_end>" for file-backed THP split tests,
      since tmpfs may have files backed by THPs that are mapped nowhere.
      
      Update selftest program to test file-backed THP split too.
      
      Link: https://lkml.kernel.org/r/20210331235309.332292-2-zi.yan@sent.com
      
      
      Signed-off-by: Zi Yan <ziy@nvidia.com>
      Suggested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Reviewed-by: Yang Shi <shy828301@gmail.com>
      Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: Sandipan Das <sandipan@linux.ibm.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Mika Penttila <mika.penttila@nextfour.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fbe37501
    • mm: huge_memory: a new debugfs interface for splitting THP tests · fa6c0231
      Zi Yan authored
      We did not have a direct user interface for splitting the compound page
      backing a THP, and there is no need for one unless we want to expose
      THP implementation details to users.  Make <debugfs>/split_huge_pages
      accept a new command to do that.
      
      By writing "<pid>,<vaddr_start>,<vaddr_end>" to
      <debugfs>/split_huge_pages, THPs within the given virtual address range
      from the process with the given pid are split. This is used to test the
      split_huge_page() function. In addition, a selftest program is added to
      tools/testing/selftests/vm to utilize the interface by splitting
      PMD THPs and PTE-mapped THPs.
      
      This does not change the old behavior, i.e., writing 1 to the interface
      to split all THPs in the system.
      
      Link: https://lkml.kernel.org/r/20210331235309.332292-1-zi.yan@sent.com
      
      
      Signed-off-by: Zi Yan <ziy@nvidia.com>
      Reviewed-by: Yang Shi <shy828301@gmail.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mika Penttila <mika.penttila@nextfour.com>
      Cc: Sandipan Das <sandipan@linux.ibm.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fa6c0231
  3. Apr 30, 2021
  4. Apr 27, 2021
    • bpf: Fix propagation of 32 bit unsigned bounds from 64 bit bounds · 10bf4e83
      Daniel Borkmann authored
      
      Similarly as b0270958 ("bpf: Fix propagation of 32-bit signed bounds
      from 64-bit bounds."), we also need to fix the propagation of 32 bit
      unsigned bounds from 64 bit counterparts. That is, really only set the
      u32_{min,max}_value when /both/ {umin,umax}_value safely fit in 32 bit
      space. For example, the register with a umin_value == 1 does /not/ imply
      that u32_min_value is also equal to 1, since umax_value could be much
      larger than the 32 bit subregister can hold, and thus u32_min_value is
      in the interval [0,1] instead.
      
      Before fix, invalid tracking result of R2_w=inv1:
      
        [...]
        5: R0_w=inv1337 R1=ctx(id=0,off=0,imm=0) R2_w=inv(id=0) R10=fp0
        5: (35) if r2 >= 0x1 goto pc+1
        [...] // goto path
        7: R0=inv1337 R1=ctx(id=0,off=0,imm=0) R2=inv(id=0,umin_value=1) R10=fp0
        7: (b6) if w2 <= 0x1 goto pc+1
        [...] // goto path
        9: R0=inv1337 R1=ctx(id=0,off=0,imm=0) R2=inv(id=0,smin_value=-9223372036854775807,smax_value=9223372032559808513,umin_value=1,umax_value=18446744069414584321,var_off=(0x1; 0xffffffff00000000),s32_min_value=1,s32_max_value=1,u32_max_value=1) R10=fp0
        9: (bc) w2 = w2
        10: R0=inv1337 R1=ctx(id=0,off=0,imm=0) R2_w=inv1 R10=fp0
        [...]
      
      After fix, correct tracking result of R2_w=inv(id=0,umax_value=1,var_off=(0x0; 0x1)):
      
        [...]
        5: R0_w=inv1337 R1=ctx(id=0,off=0,imm=0) R2_w=inv(id=0) R10=fp0
        5: (35) if r2 >= 0x1 goto pc+1
        [...] // goto path
        7: R0=inv1337 R1=ctx(id=0,off=0,imm=0) R2=inv(id=0,umin_value=1) R10=fp0
        7: (b6) if w2 <= 0x1 goto pc+1
        [...] // goto path
        9: R0=inv1337 R1=ctx(id=0,off=0,imm=0) R2=inv(id=0,smax_value=9223372032559808513,umax_value=18446744069414584321,var_off=(0x0; 0xffffffff00000001),s32_min_value=0,s32_max_value=1,u32_max_value=1) R10=fp0
        9: (bc) w2 = w2
        10: R0=inv1337 R1=ctx(id=0,off=0,imm=0) R2_w=inv(id=0,umax_value=1,var_off=(0x0; 0x1)) R10=fp0
        [...]
      
      Thus, same issue as in b0270958 holds for unsigned subregister tracking.
      Also, align __reg64_bound_u32() similarly to __reg64_bound_s32() as done in
      b0270958 to make them uniform again.
      
      Fixes: 3f50f132 ("bpf: Verifier, do explicit ALU32 bounds tracking")
      Reported-by: Manfred Paul <(@_manfp)>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Reviewed-by: John Fastabend <john.fastabend@gmail.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      10bf4e83
    • selftests/bpf: Fix core_reloc test runner · bede0ebf
      Andrii Nakryiko authored
      
      Fix the failed-test checks in the core_reloc test runner, which allowed
      failing tests to pass quietly. Also add an extra check to make sure
      that test cases expected to fail due to invalid names are still
      reported as test failures, as this is not an expected failure mode.
      Also fix mislabeled probed vs. direct bitfield test cases.
      
      Fixes: 124a892d ("selftests/bpf: Test TYPE_EXISTS and TYPE_SIZE CO-RE relocations")
      Reported-by: Lorenz Bauer <lmb@cloudflare.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Lorenz Bauer <lmb@cloudflare.com>
      Link: https://lore.kernel.org/bpf/20210426192949.416837-6-andrii@kernel.org
      bede0ebf
    • selftests/bpf: Fix field existence CO-RE reloc tests · 5a30eb23
      Andrii Nakryiko authored
      
      Negative field existence cases have a broken assumption that a
      FIELD_EXISTS CO-RE relocation will fail for fields that match the name
      but have an incompatible type signature. That's not how CO-RE
      relocations generally behave. Types and fields that match by name but
      not by expected type are treated as non-matching candidates and are
      skipped. An error is reported later if no matching candidate was found.
      That's what happens for most relocations, but existence relocations
      (FIELD_EXISTS and TYPE_EXISTS) are more permissive: they are designed
      to return 0 or 1, depending on whether a match is found. This makes it
      easy to handle name-conflicting but incompatible types in BPF code.
      Combined with ___flavor suffixes, it's possible to handle pretty much
      any structural type change in the kernel within once-compiled BPF
      source code.
      
      So, long story short, negative field existence test cases are invalid in their
      assumptions, so this patch reworks them into a single consolidated positive
      case that doesn't match any of the fields.
      
      Fixes: c7566a69 ("selftests/bpf: Add field existence CO-RE relocs tests")
      Reported-by: Lorenz Bauer <lmb@cloudflare.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Lorenz Bauer <lmb@cloudflare.com>
      Link: https://lore.kernel.org/bpf/20210426192949.416837-5-andrii@kernel.org
      5a30eb23
    • selftests/bpf: Fix BPF_CORE_READ_BITFIELD() macro · 0f20615d
      Andrii Nakryiko authored
      
      Fix the BPF_CORE_READ_BITFIELD() macro used for reading
      CO-RE-relocatable bitfields. Missing breaks in a switch statement
      caused 8-byte reads to be performed in every case. This can confuse
      libbpf because it does strict checks that the memory load size
      corresponds to the original size of the field, which in this case
      would quite often be wrong.
      
      After fixing that, we run into another problem, which is quite subtle,
      so it is worth documenting here. The issue is in the interaction
      between Clang optimizations and CO-RE relocations. Without the asm
      volatile construct (also known as barrier_var()), Clang will re-order
      the BYTE_OFFSET and BYTE_SIZE relocations and will apply BYTE_OFFSET
      four times, once for each switch case arm. This will result in the
      same error from libbpf about a mismatch between memory load size and
      original field size. I.e., if we were reading a u32, we'd still have
      *(u8 *), *(u16 *), *(u32 *), and *(u64 *) memory loads, three of which
      would fail. Using barrier_var() forces Clang to apply the BYTE_OFFSET
      relocation first (and once) to calculate p, after which the value of p
      is used without relocation in each of the switch case arms, doing an
      appropriately-sized memory load.
      
      Here's the list of relevant relocations and pieces of generated BPF code
      before and after this patch for test_core_reloc_bitfields_direct selftests.
      
      BEFORE
      =====
       #45: core_reloc: insn #160 --> [5] + 0:5: byte_sz --> struct core_reloc_bitfields.u32
       #46: core_reloc: insn #167 --> [5] + 0:5: byte_off --> struct core_reloc_bitfields.u32
       #47: core_reloc: insn #174 --> [5] + 0:5: byte_off --> struct core_reloc_bitfields.u32
       #48: core_reloc: insn #178 --> [5] + 0:5: byte_off --> struct core_reloc_bitfields.u32
       #49: core_reloc: insn #182 --> [5] + 0:5: byte_off --> struct core_reloc_bitfields.u32
      
           157:       18 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r2 = 0 ll
           159:       7b 12 20 01 00 00 00 00 *(u64 *)(r2 + 288) = r1
           160:       b7 02 00 00 04 00 00 00 r2 = 4
      ; BYTE_SIZE relocation here                 ^^^
           161:       66 02 07 00 03 00 00 00 if w2 s> 3 goto +7 <LBB0_63>
           162:       16 02 0d 00 01 00 00 00 if w2 == 1 goto +13 <LBB0_65>
           163:       16 02 01 00 02 00 00 00 if w2 == 2 goto +1 <LBB0_66>
           164:       05 00 12 00 00 00 00 00 goto +18 <LBB0_69>
      
      0000000000000528 <LBB0_66>:
           165:       18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r1 = 0 ll
           167:       69 11 08 00 00 00 00 00 r1 = *(u16 *)(r1 + 8)
      ; BYTE_OFFSET relo here w/ WRONG size        ^^^^^^^^^^^^^^^^
           168:       05 00 0e 00 00 00 00 00 goto +14 <LBB0_69>
      
      0000000000000548 <LBB0_63>:
           169:       16 02 0a 00 04 00 00 00 if w2 == 4 goto +10 <LBB0_67>
           170:       16 02 01 00 08 00 00 00 if w2 == 8 goto +1 <LBB0_68>
           171:       05 00 0b 00 00 00 00 00 goto +11 <LBB0_69>
      
      0000000000000560 <LBB0_68>:
           172:       18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r1 = 0 ll
           174:       79 11 08 00 00 00 00 00 r1 = *(u64 *)(r1 + 8)
      ; BYTE_OFFSET relo here w/ WRONG size        ^^^^^^^^^^^^^^^^
           175:       05 00 07 00 00 00 00 00 goto +7 <LBB0_69>
      
      0000000000000580 <LBB0_65>:
           176:       18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r1 = 0 ll
           178:       71 11 08 00 00 00 00 00 r1 = *(u8 *)(r1 + 8)
      ; BYTE_OFFSET relo here w/ WRONG size        ^^^^^^^^^^^^^^^^
           179:       05 00 03 00 00 00 00 00 goto +3 <LBB0_69>
      
      00000000000005a0 <LBB0_67>:
           180:       18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r1 = 0 ll
           182:       61 11 08 00 00 00 00 00 r1 = *(u32 *)(r1 + 8)
      ; BYTE_OFFSET relo here w/ RIGHT size        ^^^^^^^^^^^^^^^^
      
      00000000000005b8 <LBB0_69>:
           183:       67 01 00 00 20 00 00 00 r1 <<= 32
           184:       b7 02 00 00 00 00 00 00 r2 = 0
           185:       16 02 02 00 00 00 00 00 if w2 == 0 goto +2 <LBB0_71>
           186:       c7 01 00 00 20 00 00 00 r1 s>>= 32
           187:       05 00 01 00 00 00 00 00 goto +1 <LBB0_72>
      
      00000000000005e0 <LBB0_71>:
           188:       77 01 00 00 20 00 00 00 r1 >>= 32
      
      AFTER
      =====
      
       #30: core_reloc: insn #132 --> [5] + 0:5: byte_off --> struct core_reloc_bitfields.u32
       #31: core_reloc: insn #134 --> [5] + 0:5: byte_sz --> struct core_reloc_bitfields.u32
      
           129:       18 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r2 = 0 ll
           131:       7b 12 20 01 00 00 00 00 *(u64 *)(r2 + 288) = r1
           132:       b7 01 00 00 08 00 00 00 r1 = 8
      ; BYTE_OFFSET relo here                     ^^^
      ; no size check for non-memory dereferencing instructions
           133:       0f 12 00 00 00 00 00 00 r2 += r1
           134:       b7 03 00 00 04 00 00 00 r3 = 4
      ; BYTE_SIZE relocation here                 ^^^
           135:       66 03 05 00 03 00 00 00 if w3 s> 3 goto +5 <LBB0_63>
           136:       16 03 09 00 01 00 00 00 if w3 == 1 goto +9 <LBB0_65>
           137:       16 03 01 00 02 00 00 00 if w3 == 2 goto +1 <LBB0_66>
           138:       05 00 0a 00 00 00 00 00 goto +10 <LBB0_69>
      
      0000000000000458 <LBB0_66>:
           139:       69 21 00 00 00 00 00 00 r1 = *(u16 *)(r2 + 0)
      ; NO CO-RE relocation here                   ^^^^^^^^^^^^^^^^
           140:       05 00 08 00 00 00 00 00 goto +8 <LBB0_69>
      
      0000000000000468 <LBB0_63>:
           141:       16 03 06 00 04 00 00 00 if w3 == 4 goto +6 <LBB0_67>
           142:       16 03 01 00 08 00 00 00 if w3 == 8 goto +1 <LBB0_68>
           143:       05 00 05 00 00 00 00 00 goto +5 <LBB0_69>
      
      0000000000000480 <LBB0_68>:
           144:       79 21 00 00 00 00 00 00 r1 = *(u64 *)(r2 + 0)
      ; NO CO-RE relocation here                   ^^^^^^^^^^^^^^^^
           145:       05 00 03 00 00 00 00 00 goto +3 <LBB0_69>
      
      0000000000000490 <LBB0_65>:
           146:       71 21 00 00 00 00 00 00 r1 = *(u8 *)(r2 + 0)
      ; NO CO-RE relocation here                   ^^^^^^^^^^^^^^^^
           147:       05 00 01 00 00 00 00 00 goto +1 <LBB0_69>
      
      00000000000004a0 <LBB0_67>:
           148:       61 21 00 00 00 00 00 00 r1 = *(u32 *)(r2 + 0)
      ; NO CO-RE relocation here                   ^^^^^^^^^^^^^^^^
      
      00000000000004a8 <LBB0_69>:
           149:       67 01 00 00 20 00 00 00 r1 <<= 32
           150:       b7 02 00 00 00 00 00 00 r2 = 0
           151:       16 02 02 00 00 00 00 00 if w2 == 0 goto +2 <LBB0_71>
           152:       c7 01 00 00 20 00 00 00 r1 s>>= 32
           153:       05 00 01 00 00 00 00 00 goto +1 <LBB0_72>
      
      00000000000004d0 <LBB0_71>:
           154:       77 01 00 00 20 00 00 00 r1 >>= 32
      
      Fixes: ee26dade ("libbpf: Add support for relocatable bitfields")
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Lorenz Bauer <lmb@cloudflare.com>
      Link: https://lore.kernel.org/bpf/20210426192949.416837-4-andrii@kernel.org
      0f20615d
    • libbpf: Support BTF_KIND_FLOAT during type compatibility checks in CO-RE · 6709a914
      Andrii Nakryiko authored
      
      Add BTF_KIND_FLOAT support when doing CO-RE field type compatibility check.
      Without this, relocations against float/double fields will fail.
      
      Also adjust one error message to emit the instruction index instead of
      the less convenient instruction byte offset.
      
      Fixes: 22541a9e ("libbpf: Add BTF_KIND_FLOAT support")
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Lorenz Bauer <lmb@cloudflare.com>
      Link: https://lore.kernel.org/bpf/20210426192949.416837-3-andrii@kernel.org
      6709a914
    • selftests/bpf: Add remaining ASSERT_xxx() variants · 7a2fa70a
      Andrii Nakryiko authored
      
      Add ASSERT_TRUE/ASSERT_FALSE for conditions that are reduced to
      true/false with custom logic. Also add the remaining arithmetic
      assertions:
        - ASSERT_LE -- less than or equal;
        - ASSERT_GT -- greater than;
        - ASSERT_GE -- greater than or equal.
      This should cover most scenarios where people fall back to error-prone
      CHECK()s.
      
      Also extend ASSERT_ERR() to print out errno, in addition to the direct
      error.
      
      Also convert a few CHECK() instances to ensure the new ASSERT_xxx()
      variants work as expected. A subsequent patch will also use
      ASSERT_TRUE/ASSERT_FALSE more extensively.
      
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Lorenz Bauer <lmb@cloudflare.com>
      Link: https://lore.kernel.org/bpf/20210426192949.416837-2-andrii@kernel.org
      7a2fa70a
  5. Apr 26, 2021
  6. Apr 24, 2021
    • tools: do not include scripts/Kbuild.include · b61442df
      Masahiro Yamada authored
      Since commit 57fd251c ("kbuild: split cc-option and friends to
      scripts/Makefile.compiler"), some kselftests fail to build.
      
      The tools/ directory opted out of Kbuild and went in a different
      direction. People copied scripts and Makefiles to the tools/ directory
      to create their own build system.
      
      tools/build/Build.include mimics scripts/Kbuild.include, but some
      tool Makefiles include the Kbuild one to import a feature that is
      missing in tools/build/Build.include:
      
       - Commit ec04aa3a ("tools/thermal: tmon: use "-fstack-protector"
         only if supported") included scripts/Kbuild.include from
         tools/thermal/tmon/Makefile to import the cc-option macro.
      
       - Commit c2390f16 ("selftests: kvm: fix for compilers that do
         not support -no-pie") included scripts/Kbuild.include from
         tools/testing/selftests/kvm/Makefile to import the try-run macro.
      
       - Commit 9cae4ace ("selftests/bpf: do not ignore clang
         failures") included scripts/Kbuild.include from
         tools/testing/selftests/bpf/Makefile to import the .DELETE_ON_ERROR
         target.
      
       - Commit 0695f8bc ("selftests/powerpc: Handle Makefile for
         unrecognized option") included scripts/Kbuild.include from
         tools/testing/selftests/powerpc/pmu/ebb/Makefile to import the
         try-run macro.
      
      Copy what they need into tools/build/Build.include, and make them
      include it instead of scripts/Kbuild.include.
      
      Link: https://lore.kernel.org/lkml/86dadf33-70f7-a5ac-cb8c-64966d2f45a1@linux.ibm.com/
      
      
      Fixes: 57fd251c ("kbuild: split cc-option and friends to scripts/Makefile.compiler")
      Reported-by: Janosch Frank <frankja@linux.ibm.com>
      Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
      Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      b61442df
  7. Apr 23, 2021