- Nov 30, 2020
-
-
Björn Töpel authored
Start using recvfrom() in the rxdrop scenario. Signed-off-by:
Björn Töpel <bjorn.topel@intel.com> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Acked-by:
Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/bpf/20201130185205.196029-8-bjorn.topel@gmail.com
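A minimal sketch of what the rxdrop receive path can look like once recvfrom() is used to kick the kernel (struct and macro names such as xsk_socket_info and BATCH_SIZE follow the xdpsock sample; the exact flow shown here is an assumption):

  static void rx_drop(struct xsk_socket_info *xsk)
  {
          __u32 idx_rx = 0;
          unsigned int rcvd;

          rcvd = xsk_ring_cons__peek(&xsk->rx, BATCH_SIZE, &idx_rx);
          if (!rcvd) {
                  /* Nothing received: if the driver requested a wakeup,
                   * recvfrom() is the syscall that kicks it. */
                  if (xsk_ring_prod__needs_wakeup(&xsk->umem->fq))
                          recvfrom(xsk_socket__fd(xsk->xsk), NULL, 0,
                                   MSG_DONTWAIT, NULL, NULL);
                  return;
          }
          /* ... refill the fill ring, then drop the packets ... */
          xsk_ring_cons__release(&xsk->rx, rcvd);
  }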
-
- Nov 27, 2020
-
-
Daniel T. Lee authored
The series of refactorings that rewrite the BPF programs written with bpf_load to use the libbpf loader is finally complete, so no BPF program in the kernel tree uses bpf_load anymore. This commit removes bpf_load, an outdated BPF loader that is difficult to keep in sync with the latest kernel BPF features and causes confusion. Also, this commit removes the now-unused trace_helper and bpf_load from the samples/bpf target objects in the Makefile. Signed-off-by:
Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by:
Andrii Nakryiko <andrii@kernel.org> Acked-by:
Jesper Dangaard Brouer <brouer@redhat.com> Acked-by:
Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20201124090310.24374-8-danieltimlee@gmail.com
-
Daniel T. Lee authored
Currently, lwt_len_hist's map lwt_len_hist_map uses pinning, and the map isn't cleared when the test ends. This leads to the map being reused for each test run, which prevents the test results from being accurate. This commit fixes the problem by removing the pinned map from bpffs. Also, this commit adds the executable permission to the shell script files. Fixes: f74599f7 ("bpf: Add tests and samples for LWT-BPF") Signed-off-by:
Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by:
Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20201124090310.24374-7-danieltimlee@gmail.com
-
Daniel T. Lee authored
This commit refactors the existing program with the libbpf bpf loader. Since kprobe, tracepoint and raw_tracepoint bpf programs can all be attached with the single bpf_program__attach() interface, the corresponding libbpf function is used here. Rather than hard-coding the number of cpus inside the code, this commit uses the number of available cpus via _SC_NPROCESSORS_ONLN. Signed-off-by:
Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by:
Andrii Nakryiko <andrii@kernel.org> Acked-by:
Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20201124090310.24374-6-danieltimlee@gmail.com
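A rough sketch of the pattern such a conversion follows: open and load the object with libbpf, rely on the generic attach, and size per-CPU work from _SC_NPROCESSORS_ONLN. The object file name and error handling below are assumptions, not the sample's actual code:

  #include <unistd.h>
  #include <bpf/libbpf.h>

  int attach_all(void)
  {
          int nr_cpus = sysconf(_SC_NPROCESSORS_ONLN);
          struct bpf_object *obj;
          struct bpf_program *prog;

          obj = bpf_object__open_file("prog_kern.o", NULL);
          if (libbpf_get_error(obj) || bpf_object__load(obj))
                  return -1;

          bpf_object__for_each_program(prog, obj) {
                  /* attach type is derived from the SEC() name:
                   * kprobe/, tracepoint/ or raw_tracepoint/ */
                  if (libbpf_get_error(bpf_program__attach(prog)))
                          return -1;
                  /* links are intentionally kept for the program lifetime */
          }
          return nr_cpus;
  }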
-
Daniel T. Lee authored
This commit refactors the existing ibumad program with the libbpf bpf loader. Attach/detach of the tracepoint bpf programs is now managed with the generic bpf_program__attach() and bpf_link__destroy() from libbpf. Also, instead of using the previous BPF MAP definition, this commit refactors the ibumad MAP definition with the new BTF-defined MAP format. To verify that this bpf program works without an infiniband device, load the ib_umad kernel module and test the program as follows: # modprobe ib_umad # ./ibumad Moreover, TRACE_HELPERS has been removed from the Makefile since it is not used by this program. Signed-off-by:
Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by:
Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20201124090310.24374-5-danieltimlee@gmail.com
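For reference, a BTF-defined map declaration of the kind this conversion switches to looks roughly like this (the map name and key/value types here are illustrative, not taken from the ibumad sample):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_HASH);
          __uint(max_entries, 256);
          __type(key, __u32);
          __type(value, __u64);
  } counters SEC(".maps");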
-
Daniel T. Lee authored
This commit refactors the existing kprobe program with the libbpf bpf loader. To attach the bpf program, this uses the generic bpf_program__attach() approach rather than bpf_load's load_bpf_file(). To attach bpf to a perf_event, instead of the previous ioctl method, this commit uses bpf_program__attach_perf_event since it manages both enabling the perf_event and attaching the BPF program to it, which is a much more intuitive way to achieve this. Also, the explicit close(fd) has been removed since the event will be closed automatically inside bpf_link__destroy(). Furthermore, to prevent conflicts between identically named uprobe events, the O_TRUNC flag has been used to clear the 'uprobe_events' interface. Signed-off-by:
Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by:
Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20201124090310.24374-4-danieltimlee@gmail.com
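A hedged sketch of the attach-to-perf-event step described above; the event attributes and variable names are assumptions (prog is assumed to come from an earlier bpf_object__open/load), not the sample's actual configuration:

  struct perf_event_attr attr = {
          .type = PERF_TYPE_SOFTWARE,
          .config = PERF_COUNT_SW_CPU_CLOCK,
          .freq = 1,
          .sample_freq = 1000,
  };
  int pmu_fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
                       cpu, -1 /* group fd */, 0);
  struct bpf_link *link = bpf_program__attach_perf_event(prog, pmu_fd);

  /* ... run the workload ... */

  bpf_link__destroy(link);  /* no explicit close(pmu_fd) needed anymore */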
-
Daniel T. Lee authored
This commit refactors the existing cgroup program with the libbpf bpf loader. The original test_cgrp2_sock2 kept the bpf program attached to the cgroup hierarchy even after the exit of the user program. To implement the same functionality with libbpf, this commit uses BPF_LINK_PINNING to pin the link attachment even after it is closed. Since this uses a LINK instead of ATTACH, detaching the bpf program from the cgroup with 'test_cgrp2_sock' is not used anymore. Code to mount bpffs was added to the .sh file in case it is not mounted on /sys/fs/bpf. Additionally, to fix the problem that the shell script cannot find the binary object from the current path, the relative path './' has been added in front of the binary. Fixes: 554ae6e7 ("samples/bpf: add userspace example for prohibiting sockets") Signed-off-by:
Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by:
Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20201124090310.24374-3-danieltimlee@gmail.com
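A minimal sketch of attach-then-pin, which is what keeps the program attached after the loader exits (the cgroup path, pin path and variable names are assumptions; prog comes from an earlier libbpf open/load):

  int cg_fd = open("/sys/fs/cgroup/foo", O_RDONLY);
  struct bpf_link *link;

  link = bpf_program__attach_cgroup(prog, cg_fd);
  if (libbpf_get_error(link))
          return -1;
  /* pin the link in bpffs so it survives process exit */
  if (bpf_link__pin(link, "/sys/fs/bpf/cgrp2_sock2_link"))
          return -1;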
-
Daniel T. Lee authored
This commit refactors the existing cgroup programs with the libbpf bpf loader. Since bpf_program__attach doesn't support cgroup program attachment, this explicitly attaches the cgroup bpf program with bpf_program__attach_cgroup(bpf_prog, cg1). Also, to change the attach_type of the bpf program, this uses libbpf's bpf_program__set_expected_attach_type helper to switch EGRESS to INGRESS. To keep the bpf program attached to the cgroup hierarchy even after exit, this commit uses BPF_LINK_PINNING to pin the link attachment even after it is closed. Additionally, this program was broken due to a typo in the BPF MAP definition; this commit fixes it by renaming the 'queue_stats' map struct from hvm_queue_stats to hbm_queue_stats. Fixes: 36b5d471 ("selftests/bpf: samples/bpf: Split off legacy stuff from bpf_helpers.h") Signed-off-by:
Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by:
Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20201124090310.24374-2-danieltimlee@gmail.com
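A short sketch of flipping the expected attach type on the same program object before attaching (variable names are assumptions):

  /* program was compiled for egress; reuse the object for ingress */
  bpf_program__set_expected_attach_type(prog, BPF_CGROUP_INET_INGRESS);
  link = bpf_program__attach_cgroup(prog, cg_fd);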
-
- Nov 17, 2020
-
-
Magnus Karlsson authored
Increment the statistics over how many Tx packets have been sent at the time of sending instead of at the time of completion. This is because a completion event means that the buffer has been sent AND returned to user space. The packet always gets sent shortly after sendto() is called. The kernel might, for performance reasons, decide to not return every single buffer to user space immediately after sending, for example, only after a batch of packets have been transmitted. Incrementing the number of packets sent at completion would in that case be confusing: if you send a single packet, the counter might show zero for a while even though the packet has been transmitted. Signed-off-by:
Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Acked-by:
John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/1605525167-14450-2-git-send-email-magnus.karlsson@gmail.com
-
- Nov 10, 2020
-
-
Hangbin Liu authored
tcbpf2_kern.o and the related kernel sections were moved to the bpf selftests folder in b05cd740 ("samples/bpf: remove the bpf tunnel testsuite."). Remove this leftover as well. Fixes: b05cd740 ("samples/bpf: remove the bpf tunnel testsuite.") Signed-off-by:
Hangbin Liu <liuhangbin@gmail.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Acked-by:
Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20201110015013.1570716-3-liuhangbin@gmail.com
-
- Nov 09, 2020
-
-
Menglong Dong authored
The 'bpf/bpf.h' include in 'samples/bpf/hbm.c' is duplicated. Signed-off-by:
Menglong Dong <dong.menglong@zte.com.cn> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Acked-by:
John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/1604654034-52821-1-git-send-email-dong.menglong@zte.com.cn
-
- Oct 28, 2020
-
-
Sudeep Dutt authored
This patch removes the MIC drivers from the kernel tree since the corresponding devices have been discontinued. Removing the dma and char-misc changes in one patch and merging via the char-misc tree is best to avoid any potential build breakage. Cc: Nikhil Rao <nikhil.rao@intel.com> Reviewed-by:
Ashutosh Dixit <ashutosh.dixit@intel.com> Signed-off-by:
Sudeep Dutt <sudeep.dutt@intel.com> Acked-By:
Vinod Koul <vkoul@kernel.org> Reviewed-by:
Sherry Sun <sherry.sun@nxp.com> Link: https://lore.kernel.org/r/8c1443136563de34699d2c084df478181c205db4.1603854416.git.sudeep.dutt@intel.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- Oct 27, 2020
-
-
Toke Høiland-Jørgensen authored
The memlock rlimit is a notorious source of failure for BPF programs. Most of the samples just set it to infinity, but a few used a lower limit. The problem with unconditionally setting a lower limit is that this will also override the limit if the system-wide setting is *higher* than the limit being set, which can lead to failures on systems that lock a lot of memory, but set 'ulimit -l' to unlimited before running a sample. One fix for this is to only conditionally set the limit if the current limit is lower, but it is simpler to just unify all the samples and have them all set the limit to infinity. Signed-off-by:
Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Acked-by:
Andrii Nakryiko <andrii@kernel.org> Acked-by:
Jesper Dangaard Brouer <brouer@redhat.com> Link: https://lore.kernel.org/bpf/20201026233623.91728-1-toke@redhat.com
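The unified pattern the commit describes boils down to a few lines in each sample (a minimal sketch):

  #include <sys/resource.h>

  struct rlimit r = { RLIM_INFINITY, RLIM_INFINITY };

  if (setrlimit(RLIMIT_MEMLOCK, &r)) {
          perror("setrlimit(RLIMIT_MEMLOCK)");
          return 1;
  }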
-
- Oct 21, 2020
-
-
Daniel Borkmann authored
Yaniv reported a compilation error after pulling the latest libbpf: [...] ../libbpf/src/root/usr/include/bpf/bpf_helpers.h:99:10: error: unknown register name 'r0' in asm : "r0", "r1", "r2", "r3", "r4", "r5"); [...] The issue got triggered because Yaniv was compiling tracing programs with the native target (e.g. x86) instead of the BPF target, hence no BTF-generated vmlinux.h nor CO-RE was used, and llc with -march=bpf was later invoked to compile from LLVM IR to a BPF object file. Given that clang was expecting x86 inline asm and not BPF asm, the error complained that these registers don't exist on the former. Guard bpf_tail_call_static() with defined(__bpf__) where BPF inline asm is valid to use. BPF tracing programs on more modern kernels use the BPF target anyway and thus the bpf_tail_call_static() function will be available for them. BPF inline asm is supported since clang 7 (clang <= 6 otherwise throws the same error as above), and __bpf_unreachable() since clang 8, therefore include the latter condition in order to prevent compilation errors for older clang versions. Given even an old Ubuntu 18.04 LTS has official LLVM packages all the way up to llvm-10, I did not bother to special case the __bpf_unreachable() inside bpf_tail_call_static() further. Also, undo the sockex3_kern sample's use of bpf_tail_call_static(), given the samples still have the old hacky way of compiling networking progs with the native instead of the BPF target, so bpf_tail_call_static() won't be defined there anymore. Fixes: 0e9f6841 ("bpf, libbpf: Add bpf_tail_call_static helper for bpf programs") Reported-by:
Yaniv Agman <yanivagman@gmail.com> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Acked-by:
Andrii Nakryiko <andrii@kernel.org> Acked-by:
Yonghong Song <yhs@fb.com> Tested-by:
Yaniv Agman <yanivagman@gmail.com> Link: https://lore.kernel.org/bpf/CAMy7=ZUk08w5Gc2Z-EKi4JFtuUCaZYmE4yzhJjrExXpYKR4L8w@mail.gmail.com Link: https://lore.kernel.org/bpf/20201021203257.26223-1-daniel@iogearbox.net
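The resulting guard in bpf_helpers.h has roughly this shape (a simplified sketch of the condition described above, not the verbatim header):

  #if __clang_major__ >= 8 && defined(__bpf__)
  static __always_inline void
  bpf_tail_call_static(void *ctx, const void *map, const __u32 slot)
  {
          if (!__builtin_constant_p(slot))
                  __bpf_unreachable();
          /* BPF inline asm emitting the tail call with an immediate
           * slot index goes here */
  }
  #endif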
-
- Oct 14, 2020
-
-
Hui Su authored
kmemleak-test.c is just a kmemleak test module, which also cannot be used as a built-in kernel module. Thus, it should not live in the mm directory, so move kmemleak-test.c to samples/kmemleak/kmemleak-test.c. Fix the spelling of built-in along the way. Signed-off-by:
Hui Su <sh_def@163.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Mauro Carvalho Chehab <mchehab+huawei@kernel.org> Cc: David S. Miller <davem@davemloft.net> Cc: Rob Herring <robh@kernel.org> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Sam Ravnborg <sam@ravnborg.org> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> Cc: Divya Indi <divya.indi@oracle.com> Cc: Tomas Winkler <tomas.winkler@intel.com> Cc: David Howells <dhowells@redhat.com> Link: https://lkml.kernel.org/r/20200925183729.GA172837@rlk Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- Oct 11, 2020
-
-
Daniel T. Lee authored
Most of the samples were converted to the new BTF-defined MAP as they moved to libbpf, but some samples were missed. Instead of using the previous BPF MAP definition, this commit refactors the xdp_monitor and xdp_sample_pkts_kern MAP definitions with the new BTF-defined MAP format. Also, this commit removes the max_entries attribute from the PERF_EVENT_ARRAY map type. libbpf's bpf_object__create_map() will automatically set max_entries to the maximum configured number of CPUs on the host. Signed-off-by:
Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Acked-by:
Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20201010181734.1109-4-danieltimlee@gmail.com
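For illustration, a BTF-defined perf event array with max_entries deliberately left out so libbpf sizes it to the number of configured CPUs (using the same bpf_helpers.h macros as the earlier map sketch; the map name is an assumption):

  struct {
          __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
          __uint(key_size, sizeof(int));
          __uint(value_size, sizeof(__u32));
  } perf_map SEC(".maps");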
-
Daniel T. Lee authored
From commit d7a18ea7 ("libbpf: Add generic bpf_program__attach()"), for some BPF programs, it is now possible to attach BPF programs with __attach() instead of explicitly calling __attach_<type>(). This commit refactors the __attach_tracepoint() with libbpf's generic __attach() method. In addition, this refactors the logic of setting the map FD to simplify the code. Also, the missing removal of bpf_load.o in Makefile has been fixed. Signed-off-by:
Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Acked-by:
Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20201010181734.1109-3-danieltimlee@gmail.com
-
Daniel T. Lee authored
To avoid the confusion caused by the increasing fragmentation of BPF loader programs, this commit switches to the libbpf loader instead of using bpf_load. Thanks to libbpf's bpf_link interface, managing the tracepoint BPF program is much easier. bpf_program__attach_tracepoint manages enabling the tracepoint event and attaching the BPF program to it through the single bpf_link interface, so there is no need to manage event_fd and prog_fd separately. This commit refactors xdp_monitor using this libbpf API, and the bpf_load usage is removed and migrated to libbpf. Signed-off-by:
Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Acked-by:
Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20201010181734.1109-2-danieltimlee@gmail.com
-
- Oct 07, 2020
-
-
Bartosz Golaszewski authored
pr_*() printing helpers are preferred over using bare printk(). Signed-off-by:
Bartosz Golaszewski <bgolaszewski@baylibre.com> Signed-off-by:
Christoph Hellwig <hch@lst.de>
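A one-line illustration of the change (the message text is an assumption, not the sample's actual string):

  /* before */
  printk(KERN_INFO "configfs_sample: module loaded\n");
  /* after */
  pr_info("configfs_sample: module loaded\n");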
-
Bartosz Golaszewski authored
The copyright notice makes checkpatch.pl warn about using spaces before tabs. Fix this. Signed-off-by:
Bartosz Golaszewski <bgolaszewski@baylibre.com> Signed-off-by:
Christoph Hellwig <hch@lst.de>
-
Bartosz Golaszewski authored
Move local variables of the same type into a single line for better readability. Signed-off-by:
Bartosz Golaszewski <bgolaszewski@baylibre.com> Signed-off-by:
Christoph Hellwig <hch@lst.de>
-
Bartosz Golaszewski authored
The structure containing the storeme field is allocated using kzalloc(). There's no need to set it to 0 again. Signed-off-by:
Bartosz Golaszewski <bgolaszewski@baylibre.com> Signed-off-by:
Christoph Hellwig <hch@lst.de>
-
Bartosz Golaszewski authored
simple_strtoul() is deprecated. Use kstrtoint(). Signed-off-by:
Bartosz Golaszewski <bgolaszewski@baylibre.com> Signed-off-by:
Christoph Hellwig <hch@lst.de>
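A short sketch of the replacement pattern (the buffer and variable names are illustrative):

  int val, ret;

  /* before: no real error handling */
  val = simple_strtoul(page, NULL, 10);

  /* after: kstrtoint() reports malformed input */
  ret = kstrtoint(page, 10, &val);
  if (ret)
          return ret;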
-
Bartosz Golaszewski authored
Align the assignment of a static structure's field to be consistent with all other instances. Signed-off-by:
Bartosz Golaszewski <bgolaszewski@baylibre.com> Signed-off-by:
Christoph Hellwig <hch@lst.de>
-
Bartosz Golaszewski authored
Checking pointers for NULL value before passing them to container_of() is pointless because even if we return NULL from the ternary operator, none of the users checks the returned value but they instead dereference it unconditionally. AFAICT this cannot really happen either. Simplify the code by removing the ternary operators from to_childless() et al. Signed-off-by:
Bartosz Golaszewski <bgolaszewski@baylibre.com> Signed-off-by:
Christoph Hellwig <hch@lst.de>
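Roughly, the simplification looks like this, sketched against the configfs sample's to_childless(); treat the exact struct and helper names as approximate:

  /* before: NULL check that never helps, since callers dereference anyway */
  static inline struct childless *to_childless(struct config_item *item)
  {
          return item ? container_of(to_configfs_subsystem(to_config_group(item)),
                                     struct childless, subsys) : NULL;
  }

  /* after */
  static inline struct childless *to_childless(struct config_item *item)
  {
          return container_of(to_configfs_subsystem(to_config_group(item)),
                              struct childless, subsys);
  }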
-
Bartosz Golaszewski authored
There's no need for supplemental newlines in the source file - especially since the examples are already well divided by comments. Signed-off-by:
Bartosz Golaszewski <bgolaszewski@baylibre.com> Signed-off-by:
Christoph Hellwig <hch@lst.de>
-
- Oct 06, 2020
-
-
Ciara Loftus authored
Add an option to count the number of interrupts generated per second and total number of interrupts during the lifetime of the application for a given interface. This information is extracted from /proc/interrupts. Since there is no naming convention across drivers, the user must provide the string which is specific to their interface in the /proc/interrupts file on the command line. Usage: ./xdpsock ... -I <irq_str> eg. for queue 0 of i40e device eth0: ./xdpsock ... -I i40e-eth0-TxRx-0 Signed-off-by:
Ciara Loftus <ciara.loftus@intel.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20201002133612.31536-3-ciara.loftus@intel.com
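A rough sketch of how such a per-interface interrupt count can be derived from /proc/interrupts; the parsing details below are an assumption, not necessarily the sample's implementation:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  static unsigned long irqs_for(const char *irq_str)
  {
          char line[4096];
          unsigned long total = 0;
          FILE *f = fopen("/proc/interrupts", "r");

          if (!f)
                  return 0;
          while (fgets(line, sizeof(line), f)) {
                  char *p = strchr(line, ':'), *end;

                  if (!p || !strstr(line, irq_str))
                          continue;
                  /* sum the per-CPU columns until a non-numeric token */
                  for (p++; ; p = end) {
                          unsigned long v = strtoul(p, &end, 10);

                          if (end == p)
                                  break;
                          total += v;
                  }
          }
          fclose(f);
          return total;
  }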
-
Ciara Loftus authored
Categorise and record syscalls issued in the xdpsock sample app. The categories recorded are:
  rx_empty_polls: polls when the rx ring is empty
  fill_fail_polls: polls when failed to get addr from fill ring
  copy_tx_sendtos: sendtos issued for tx when copy mode enabled
  tx_wakeup_sendtos: sendtos issued when tx ring needs waking up
  opt_polls: polls issued since the '-p' flag is set
Print the stats using '-a' on the xdpsock command line. Signed-off-by:
Ciara Loftus <ciara.loftus@intel.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Acked-by:
Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20201002133612.31536-2-ciara.loftus@intel.com
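One plausible shape for these counters as a struct; the field names are taken from the list above, but the actual layout and struct name in the sample are assumptions:

  struct xsk_app_stats {
          unsigned long rx_empty_polls;
          unsigned long fill_fail_polls;
          unsigned long copy_tx_sendtos;
          unsigned long tx_wakeup_sendtos;
          unsigned long opt_polls;
  };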
-
Ciara Loftus authored
New statistics will be added in future commits. In preparation for this, let's split out the existing statistics into their own struct. Signed-off-by:
Ciara Loftus <ciara.loftus@intel.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Acked-by:
Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20201002133612.31536-1-ciara.loftus@intel.com
-
Yonghong Song authored
Compiling samples/bpf hits an error related to fallthrough marking. ...
  CC samples/bpf/hbm.o
  samples/bpf/hbm.c: In function ‘main’:
  samples/bpf/hbm.c:486:4: error: ‘fallthrough’ undeclared (first use in this function)
      fallthrough;
      ^~~~~~~~~~~
"fallthrough" is not defined under the tools/include directory; rather, "__fallthrough" is defined in linux/compiler.h. Including "linux/compiler.h" and using "__fallthrough" fixes the issue. Signed-off-by:
Yonghong Song <yhs@fb.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20201006043427.1891805-1-yhs@fb.com
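A small sketch of what the fix looks like in option handling code; the case labels and variables are illustrative, not hbm.c's actual switch:

  #include <linux/compiler.h>  /* per the commit above, provides __fallthrough */

  switch (opt) {
  case 'd':
          debugflag = true;
          __fallthrough;       /* 'fallthrough' is not defined here */
  case 'r':
          rate = atoi(optarg);
          break;
  }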
-
Yonghong Song authored
With the latest llvm trunk, bpf programs under the samples/bpf directory, if using CORE, may experience the following errors:
  LLVM ERROR: Cannot select: intrinsic %llvm.preserve.struct.access.index
  PLEASE submit a bug report to https://bugs.llvm.org/ and include the crash backtrace.
  Stack dump:
  0. Program arguments: llc -march=bpf -filetype=obj -o samples/bpf/test_probe_write_user_kern.o
  1. Running pass 'Function Pass Manager' on module '<stdin>'.
  2. Running pass 'BPF DAG->DAG Pattern Instruction Selection' on function '@bpf_prog1'
  #0 0x000000000183c26c llvm::sys::PrintStackTrace(llvm::raw_ostream&, int) (/data/users/yhs/work/llvm-project/llvm/build.cur/install/bin/llc+0x183c26c)
  ...
  #7 0x00000000017c375e (/data/users/yhs/work/llvm-project/llvm/build.cur/install/bin/llc+0x17c375e)
  #8 0x00000000016a75c5 llvm::SelectionDAGISel::CannotYetSelect(llvm::SDNode*) (/data/users/yhs/work/llvm-project/llvm/build.cur/install/bin/llc+0x16a75c5)
  #9 0x00000000016ab4f8 llvm::SelectionDAGISel::SelectCodeCommon(llvm::SDNode*, unsigned char const*, unsigned int) (/data/users/yhs/work/llvm-project/llvm/build.cur/install/bin/llc+0x16ab4f8)
  ...
  Aborted (core dumped) | llc -march=bpf -filetype=obj -o samples/bpf/test_probe_write_user_kern.o
The reason is the llvm change https://reviews.llvm.org/D87153 where the CORE relocation global generation is moved from the beginning of target-dependent optimization (llc) to the beginning of target-independent optimization (opt). Since samples/bpf programs do not use vmlinux.h and their clang compilation uses the native architecture, we need to adjust the arch triple at the opt level to do CORE relocation global generation properly. Otherwise, the above error will appear. This patch fixes the issue by introducing opt and llvm-dis into the compilation chain, which does proper CORE relocation global generation as well as O2-level optimization. Tested with llvm10, llvm11 and trunk/llvm12. Signed-off-by:
Yonghong Song <yhs@fb.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Acked-by:
Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20201006043427.1891742-1-yhs@fb.com
-
- Oct 02, 2020
-
-
Sherry Sun authored
Since struct _mic_vring_info and the vring are allocated together, with _mic_vring_info following the vring, if vring_size() is not four-byte aligned, the start address of struct _mic_vring_info will not be four-byte aligned either. For example, when the vring has 128 entries, vring_size() will be 5126 bytes. The _mic_vring_info struct layout in DDR then looks like: 0x90002400: 00000000 00390000 EE010000 0000C0FF Here 0x39 is the avail_idx member, and 0xC0FFEE01 is the magic member. When the EP uses ioread32(magic) to read the magic in the RC's shared memory, it causes a kernel panic on the ARM64 platform due to the cross-byte io read. Reading the magic in user space with le32toh(vr0->info->magic) hits the same issue. So add round_up(x, 4) for vring_size; then struct _mic_vring_info is stored this way: 0x90002400: 00000000 00000000 00000039 C0FFEE01 This avoids the kernel panic when reading the magic in struct _mic_vring_info. Signed-off-by:
Sherry Sun <sherry.sun@nxp.com> Signed-off-by:
Joakim Zhang <qiangqing.zhang@nxp.com> Link: https://lore.kernel.org/r/20200929091106.24624-4-sherry.sun@nxp.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
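A minimal sketch of the alignment fix described above; the base-pointer arithmetic and the alignment macro name are assumptions, the round_up(..., 4) is the point:

  /* 128 entries: vring_size() == 5126, rounded up to 5128, so the
   * trailing _mic_vring_info starts on a 4-byte boundary. */
  size_t vr_size = round_up(vring_size(num, MIC_VIRTIO_RING_ALIGN), 4);
  struct _mic_vring_info *info =
          (struct _mic_vring_info *)((char *)vring_base + vr_size);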
-
Sherry Sun authored
If DEBUG is enabled, the following errors occur when building mpssd, so fix them here. Only one error is listed here; the other errors are similar.
  mpssd.c: In function ‘virtio_net’:
  mpssd.c:615:21: error: incompatible type for argument 2 of ‘disp_iovec’
    disp_iovec(mic, copy, __func__, __LINE__);
                    ^~~~
  mpssd.c:361:1: note: expected ‘struct mic_copy_desc *’ but argument is of type ‘struct mic_copy_desc’
   disp_iovec(struct mic_info *mic, struct mic_copy_desc *copy,
   ^~~~~~~~~~
Signed-off-by:
Sherry Sun <sherry.sun@nxp.com> Link: https://lore.kernel.org/r/20200925071831.8025-2-sherry.sun@nxp.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
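Judging from the error above, the fix amounts to passing the descriptor by address so the call matches the prototype (a sketch; the actual patch may differ in detail):

  /* before: passes the struct by value, breaking the DEBUG build */
  disp_iovec(mic, copy, __func__, __LINE__);
  /* after */
  disp_iovec(mic, &copy, __func__, __LINE__);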
-
- Sep 30, 2020
-
-
Daniel Borkmann authored
For those locations where we use an immediate tail call map index use the newly added bpf_tail_call_static() helper. Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Acked-by:
Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/3cfb2b799a62d22c6e7ae5897c23940bdcc24cbc.1601477936.git.daniel@iogearbox.net
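For illustration, a call site of the kind this converts, where the prog array index is a compile-time constant; the section name, map and index are illustrative, not taken from a specific sample:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
          __uint(max_entries, 8);
          __uint(key_size, sizeof(__u32));
          __uint(value_size, sizeof(__u32));
  } jmp_table SEC(".maps");

  SEC("classifier")
  int entry(struct __sk_buff *skb)
  {
          /* constant index: lets the helper emit a direct tail call */
          bpf_tail_call_static(skb, &jmp_table, 0);
          return 0;   /* only reached if the tail call fails */
  }

  char _license[] SEC("license") = "GPL";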
-
- Sep 22, 2020
-
-
Andra Paraschiv authored
Add a user space sample for the usage of the ioctl interface provided by the Nitro Enclaves driver.
Changelog
v9 -> v10
* Update commit message to include the changelog before the SoB tag(s).
v8 -> v9
* No changes.
v7 -> v8
* Track NE custom error codes for invalid page size, invalid flags and enclave CID.
* Update the heartbeat logic to have a listener fd first, then start the enclave and then accept connection to get the heartbeat.
* Update the reference link to the hugetlb documentation.
v6 -> v7
* Track POLLNVAL as poll event in addition to POLLHUP.
v5 -> v6
* Remove "rc" mentioning when printing errno string.
* Remove the ioctl to query API version.
* Include usage info for NUMA-aware hugetlb configuration.
* Update documentation to kernel-doc format.
* Add logic for enclave image loading.
v4 -> v5
* Print enclave vCPU ids when they are created.
* Update logic to map the modified vCPU ioctl call.
* Add check for the path to the enclave image to be less than PATH_MAX.
* Update the ioctl calls error checking logic to match the NE specific error codes.
v3 -> v4
* Update usage details to match the updates in v4.
* Update NE ioctl interface usage.
v2 -> v3
* Remove the include directory to use the uapi from the kernel.
* Remove the GPL additional wording as SPDX-License-Identifier is already in place.
v1 -> v2
* New in v2.
Reviewed-by:
Alexander Graf <graf@amazon.com> Signed-off-by:
Alexandru Vasile <lexnv@amazon.com> Signed-off-by:
Andra Paraschiv <andraprs@amazon.com> Link: https://lore.kernel.org/r/20200921121732.44291-17-andraprs@amazon.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- Sep 18, 2020
-
-
Ilya Leoshkevich authored
s390 uses socketcall multiplexer instead of individual socket syscalls. Therefore, "kprobe/" SYSCALL(sys_connect) does not trigger and test_map_in_map fails. Fix by using "kprobe/__sys_connect" instead. Signed-off-by:
Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Acked-by:
Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200915115519.3769807-1-iii@linux.ibm.com
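For illustration, the kind of change this implies in the kprobe section name (the program body and function name are placeholders):

  /* before: "kprobe/" SYSCALL(sys_connect), which never fires on s390 */
  SEC("kprobe/__sys_connect")
  int trace_sys_connect(struct pt_regs *ctx)
  {
          /* ... */
          return 0;
  }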
-
- Sep 15, 2020
-
-
Magnus Karlsson authored
Add a quiet option (-Q) that disables the statistics print outs of xdpsock. This is good to have when measuring 0% loss rate performance as it will be quite terrible if the application uses printfs. Signed-off-by:
Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/1599726666-8431-4-git-send-email-magnus.karlsson@gmail.com
-
Magnus Karlsson authored
Fix a possible deadlock in the l2fwd application in xdpsock that can occur when there is no space in the Tx ring. There are two ways to get the kernel to consume entries in the Tx ring: calling sendto() to make it send packets and freeing entries from the completion ring, as the kernel will not send a packet if there is no space for it to add a completion entry in the completion ring. The Tx loop in l2fwd used to only call sendto(). This patch adds cleaning of the completion ring in that loop. Signed-off-by:
Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/1599726666-8431-3-git-send-email-magnus.karlsson@gmail.com
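A hedged sketch of what the amended Tx loop can look like; the function and struct names follow the xdpsock sample, but the exact loop shown here is an assumption:

  /* keep draining completions while waiting for Tx descriptors,
   * otherwise the kernel may never free up room to send */
  while (xsk_ring_prod__reserve(&xsk->tx, rcvd, &idx_tx) != rcvd) {
          complete_tx_l2fwd(xsk, fds);
          if (xsk_ring_prod__needs_wakeup(&xsk->tx))
                  kick_tx(xsk);   /* issues sendto() */
  }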
-
Magnus Karlsson authored
Fix the sending of a single packet (or small burst) in xdpsock when executing in copy mode. Currently, the l2fwd application in xdpsock only transmits the packets after a batch of them has been received, which might be confusing if you only send one packet and expect that it is returned pronto. Fix this by calling sendto() more often and add a comment in the code that states that this can be optimized if needed. Reported-by:
Tirthendu Sarkar <tirthendu.sarkar@intel.com> Signed-off-by:
Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/1599726666-8431-2-git-send-email-magnus.karlsson@gmail.com
-
- Sep 10, 2020
-
-
Mauro Carvalho Chehab authored
The kprobes.rst file was moved out of staging/. Fixes: 2165b82f ("docs: Move kprobes.rst from staging/ to trace/") Signed-off-by:
Mauro Carvalho Chehab <mchehab+huawei@kernel.org> Link: https://lore.kernel.org/r/a6d4c62e19ab1510789418a3a5ad42980cd7ae3a.1599660067.git.mchehab+huawei@kernel.org Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-