netdev CI testing #6666
Open
kuba-moo wants to merge 1,690 commits into kernel-patches:bpf-next_base from linux-netdev:to-test
+51,903 −16,013
Conversation
kernel-patches-daemon-bpf bot force-pushed the bpf-next_base branch 3 times, most recently from 4f22ee0 to 8a9a8e0 on March 28, 2024 at 04:46
kuba-moo force-pushed the to-test branch 11 times, most recently from 64c403f to 8da1f58 on March 29, 2024 at 00:01
kernel-patches-daemon-bpf bot force-pushed the bpf-next_base branch 3 times, most recently from 78ebb17 to 9325308 on March 29, 2024 at 02:14
kuba-moo force-pushed the to-test branch 6 times, most recently from c8c7b2f to a71aae6 on March 29, 2024 at 18:01
kernel-patches-daemon-bpf bot force-pushed the bpf-next_base branch from 9325308 to 7940ae1 on March 29, 2024 at 18:12
kuba-moo force-pushed the to-test branch 2 times, most recently from d8feb00 to b16a6b9 on March 30, 2024 at 00:01
kernel-patches-daemon-bpf bot force-pushed the bpf-next_base branch from 7940ae1 to 8f1ff3c on March 30, 2024 at 00:21
kuba-moo force-pushed the to-test branch 2 times, most recently from 4164329 to c5cecb3 on March 30, 2024 at 06:00
I don't see any reason why napi_enable() needs to be under the lock; the only reason I could think of would be if the IRQ handler also took this lock, but it doesn't. napi_enable() will soon need to sleep.

Signed-off-by: Jakub Kicinski <[email protected]>
Acked-by: Vincent Mailhol <[email protected]>
Reviewed-by: Francois Romieu <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
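For illustration only (this is not the driver diff carried by this commit), a minimal sketch of moving napi_enable() out of a spinlock-protected open path; the example_* names and the priv->lock field are hypothetical:

/* Hypothetical driver open path; only the parts that genuinely need the
 * lock stay under it, napi_enable() moves out because it may soon sleep. */
static int example_open(struct net_device *dev)
{
	struct example_priv *priv = netdev_priv(dev);

	spin_lock_bh(&priv->lock);
	example_hw_start(priv);		/* hardware/ring setup under the lock */
	spin_unlock_bh(&priv->lock);

	napi_enable(&priv->napi);	/* outside the lock */
	netif_start_queue(dev);

	return 0;
}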
iavf uses the netdev->lock already to protect shapers. In an upcoming series we'll try to protect NAPI instances with netdev->lock. We need to modify the protection a bit. All NAPI-related calls in the driver need to be consistently under the lock. This will allow us to easily switch to a "we already hold the lock" NAPI API later.

register_netdevice(), OTOH, must not be called under the netdev_lock() as we do not intend to have an "already locked" version of this call.

Signed-off-by: Jakub Kicinski <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
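A rough sketch of the locking rule described above, assuming the netdev_lock()/netdev_unlock() helpers named in the message; the example_* names are hypothetical and this is not the actual iavf change:

/* Illustrative only: NAPI add/enable consistently under netdev_lock(),
 * register_netdevice() outside of it. */
static int example_configure(struct net_device *netdev, struct example_q *q)
{
	netdev_lock(netdev);
	netif_napi_add(netdev, &q->napi, example_poll);
	napi_enable(&q->napi);
	netdev_unlock(netdev);

	/* must not be called under netdev_lock() */
	return register_netdevice(netdev);
}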
With two different models of USB network card (drivers r8152 and asix), ethtool reports Speed = 10Mb/s when no network cable is connected. The problem reproduces on Linux 3.10, 4.19, and 5.4.

Both drivers call mii_ethtool_get_link_ksettings(), but the value of cmd->base.speed in this function can only be SPEED_1000, SPEED_100, or SPEED_10. When the network cable is not connected, set cmd->base.speed = SPEED_UNKNOWN.

Signed-off-by: Xiangqian Zhang <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
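A minimal sketch of the idea (not necessarily the exact upstream diff), assuming the link state inside mii_ethtool_get_link_ksettings() is checked with mii_link_ok():

/* Sketch: report unknown speed/duplex when there is no link. */
if (!mii_link_ok(mii)) {
	cmd->base.speed = SPEED_UNKNOWN;
	cmd->base.duplex = DUPLEX_UNKNOWN;
	return;
}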
Haowei Yan <[email protected]> found that ets_class_from_arg() can index an out-of-bounds class when passed a clid of 0. The overflow may cause local privilege escalation.

[ 18.852298] ------------[ cut here ]------------
[ 18.853271] UBSAN: array-index-out-of-bounds in net/sched/sch_ets.c:93:20
[ 18.853743] index 18446744073709551615 is out of range for type 'ets_class [16]'
[ 18.854254] CPU: 0 UID: 0 PID: 1275 Comm: poc Not tainted 6.12.6-dirty #17
[ 18.854821] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
[ 18.856532] Call Trace:
[ 18.857441] <TASK>
[ 18.858227] dump_stack_lvl+0xc2/0xf0
[ 18.859607] dump_stack+0x10/0x20
[ 18.860908] __ubsan_handle_out_of_bounds+0xa7/0xf0
[ 18.864022] ets_class_change+0x3d6/0x3f0
[ 18.864322] tc_ctl_tclass+0x251/0x910
[ 18.864587] ? lock_acquire+0x5e/0x140
[ 18.865113] ? __mutex_lock+0x9c/0xe70
[ 18.866009] ? __mutex_lock+0xa34/0xe70
[ 18.866401] rtnetlink_rcv_msg+0x170/0x6f0
[ 18.866806] ? __lock_acquire+0x578/0xc10
[ 18.867184] ? __pfx_rtnetlink_rcv_msg+0x10/0x10
[ 18.867503] netlink_rcv_skb+0x59/0x110
[ 18.867776] rtnetlink_rcv+0x15/0x30
[ 18.868159] netlink_unicast+0x1c3/0x2b0
[ 18.868440] netlink_sendmsg+0x239/0x4b0
[ 18.868721] ____sys_sendmsg+0x3e2/0x410
[ 18.869012] ___sys_sendmsg+0x88/0xe0
[ 18.869276] ? rseq_ip_fixup+0x198/0x260
[ 18.869563] ? rseq_update_cpu_node_id+0x10a/0x190
[ 18.869900] ? trace_hardirqs_off+0x5a/0xd0
[ 18.870196] ? syscall_exit_to_user_mode+0xcc/0x220
[ 18.870547] ? do_syscall_64+0x93/0x150
[ 18.870821] ? __memcg_slab_free_hook+0x69/0x290
[ 18.871157] __sys_sendmsg+0x69/0xd0
[ 18.871416] __x64_sys_sendmsg+0x1d/0x30
[ 18.871699] x64_sys_call+0x9e2/0x2670
[ 18.871979] do_syscall_64+0x87/0x150
[ 18.873280] ? do_syscall_64+0x93/0x150
[ 18.874742] ? lock_release+0x7b/0x160
[ 18.876157] ? do_user_addr_fault+0x5ce/0x8f0
[ 18.877833] ? irqentry_exit_to_user_mode+0xc2/0x210
[ 18.879608] ? irqentry_exit+0x77/0xb0
[ 18.879808] ? clear_bhb_loop+0x15/0x70
[ 18.880023] ? clear_bhb_loop+0x15/0x70
[ 18.880223] ? clear_bhb_loop+0x15/0x70
[ 18.880426] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 18.880683] RIP: 0033:0x44a957
[ 18.880851] Code: ff ff e8 fc 00 00 00 66 2e 0f 1f 84 00 00 00 00 00 66 90 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 89 54 24 1c 48 89 74 24 10
[ 18.881766] RSP: 002b:00007ffcdd00fad8 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
[ 18.882149] RAX: ffffffffffffffda RBX: 00007ffcdd010db8 RCX: 000000000044a957
[ 18.882507] RDX: 0000000000000000 RSI: 00007ffcdd00fb70 RDI: 0000000000000003
[ 18.885037] RBP: 00007ffcdd010bc0 R08: 000000000703c770 R09: 000000000703c7c0
[ 18.887203] R10: 0000000000000080 R11: 0000000000000246 R12: 0000000000000001
[ 18.888026] R13: 00007ffcdd010da8 R14: 00000000004ca7d0 R15: 0000000000000001
[ 18.888395] </TASK>
[ 18.888610] ---[ end trace ]---

Fixes: dcc68b4 ("net: sch_ets: Add a new Qdisc")
Reported-by: Haowei Yan <[email protected]>
Suggested-by: Haowei Yan <[email protected]>
Signed-off-by: Jamal Hadi Salim <[email protected]>
Reviewed-by: Eric Dumazet <[email protected]>
Reviewed-by: Petr Machata <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
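For context, a rough sketch of the kind of bounds check described above; the exact upstream guard may differ, and callers would have to handle a NULL return:

/* net/sched/sch_ets.c (sketch): reject clid 0 before indexing so that
 * (arg - 1) cannot underflow into a huge array index. */
static struct ets_class *ets_class_from_arg(struct Qdisc *sch, unsigned long arg)
{
	struct ets_sched *q = qdisc_priv(sch);

	if (arg == 0 || arg > q->nbands)
		return NULL;
	return &q->classes[arg - 1];
}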
Lion Ackermann was able to create a UAF which can be abused for privilege escalation with the following script:

Step 1. Create the root qdisc:
tc qdisc add dev lo root handle 1:0 drr

Step 2. A class for packet aggregation, to demonstrate the UAF:
tc class add dev lo classid 1:1 drr

Step 3. A class for nesting:
tc class add dev lo classid 1:2 drr

Step 4. A class to graft a qdisc to:
tc class add dev lo classid 1:3 drr

Step 5.
tc qdisc add dev lo parent 1:1 handle 2:0 plug limit 1024

Step 6.
tc qdisc add dev lo parent 1:2 handle 3:0 drr

Step 7.
tc class add dev lo classid 3:1 drr

Step 8.
tc qdisc add dev lo parent 3:1 handle 4:0 pfifo

Step 9. Display the class/qdisc layout:

tc class ls dev lo
 class drr 1:1 root leaf 2: quantum 64Kb
 class drr 1:2 root leaf 3: quantum 64Kb
 class drr 3:1 root leaf 4: quantum 64Kb
tc qdisc ls
 qdisc drr 1: dev lo root refcnt 2
 qdisc plug 2: dev lo parent 1:1
 qdisc pfifo 4: dev lo parent 3:1 limit 1000p
 qdisc drr 3: dev lo parent 1:2

Step 10. Trigger the bug <=== prevented by this patch:
tc qdisc replace dev lo parent 1:3 handle 4:0

Step 11. Display the qdiscs/classes again:

tc class ls dev lo
 class drr 1:1 root leaf 2: quantum 64Kb
 class drr 1:2 root leaf 3: quantum 64Kb
 class drr 1:3 root leaf 4: quantum 64Kb
 class drr 3:1 root leaf 4: quantum 64Kb
tc qdisc ls
 qdisc drr 1: dev lo root refcnt 2
 qdisc plug 2: dev lo parent 1:1
 qdisc pfifo 4: dev lo parent 3:1 refcnt 2 limit 1000p
 qdisc drr 3: dev lo parent 1:2

Observe that a) the parent for 4:0 does not change despite the replace request (there can only be one parent), b) the refcount has gone up by two for 4:0, and c) both class 1:3 and 3:1 are pointing to it.

Step 12. Send one packet to plug:
echo "" | socat -u STDIN UDP4-DATAGRAM:127.0.0.1:8888,priority=$((0x10001))

Step 13. Send one packet to the grafted fifo:
echo "" | socat -u STDIN UDP4-DATAGRAM:127.0.0.1:8888,priority=$((0x10003))

Step 14. Trigger the UAF:
tc class delete dev lo classid 1:3
tc class delete dev lo classid 1:1

The semantics of "replace" are a del/add _on the same node_, not a delete from one node (3:1) and an add to another node (1:3) as in step 10. While we could "fix" this with a more complex approach, there could be consequences for expectations, so the patch takes the preventive approach of "disallow such config".

Joint work with Lion Ackermann <[email protected]>

Fixes: 1da177e ("Linux-2.6.12-rc2")
Signed-off-by: Jamal Hadi Salim <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
Add a --family option to ynl to specify the spec by family name instead of file path, with support for searching the in-tree and system install locations, and a --list-families option to show the available families.

./tools/net/ynl/pyynl/cli.py --family rt_addr --dump getaddr

Signed-off-by: Donald Hunter <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
Replace hard-coded paths for spec and schema with lookup functions so that ethtool.py will work in-tree or when installed.

Signed-off-by: Donald Hunter <[email protected]>
Acked-by: Jakub Kicinski <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
Because of patch [1] the graft behaviour changed, so the command:

tcq replace parent 100:1 handle 204:

is no longer valid and will not delete 100:4, which was added by the command:

tcq replace parent 100:4 handle 204: pfifo_fast

So, to maintain the original behaviour, this patch manually deletes 100:4 and grafts 100:1.

Note: This change will also work fine without [1].

[1] https://lore.kernel.org/netdev/[email protected]/T/#u

Signed-off-by: Victor Nogueira <[email protected]>
Reviewed-by: Jamal Hadi Salim <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
The genmask parameter is not used within the nf_tables_addchain function body. It should be removed to simplify the function parameter list.

Signed-off-by: tuqiang <[email protected]>
Signed-off-by: Jiang Kun <[email protected]>
Reviewed-by: Simon Horman <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
Reading is very slow because ->start() performs a linear re-scan of the entire hash table until it finds the successor to the last dumped element. The current implementation uses 'pos' as the number of elements to skip, then does linear iteration until it has skipped 'pos' entries.

Store the last bucket and the number of elements to skip in that bucket instead, so we can resume from that bucket directly.

Before this patch, it's possible to read ~35k entries in one second, but each read() gets slower as the number of entries to skip grows:

time timeout 60 cat /proc/net/ip_vs_conn > /tmp/all; wc -l /tmp/all
real 1m0.007s
user 0m0.003s
sys 0m59.956s
140386 /tmp/all

Only ~100k more entries got read in the remaining 59s, nowhere near the 1m entries that are stored at the time.

After this patch, the dump completes very quickly:

time cat /proc/net/ip_vs_conn > /tmp/all; wc -l /tmp/all
real 0m2.286s
user 0m0.004s
sys 0m2.281s
1000001 /tmp/all

Signed-off-by: Florian Westphal <[email protected]>
Acked-by: Julian Anastasov <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
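A sketch of the resume-from-bucket idea described above; the struct, field, and hash-table names here are illustrative rather than the actual ip_vs code, and locking is elided:

/* Sketch only: remember where the previous read() stopped so ->start()
 * can resume from that bucket instead of rescanning from bucket 0. */
struct conn_iter_state {
	unsigned int bucket;	/* bucket the last dump ended in */
	unsigned int skip;	/* entries already dumped from that bucket */
};

static void *conn_seq_start(struct seq_file *seq, loff_t *pos)
{
	struct conn_iter_state *st = seq->private;
	struct hlist_node *n;
	unsigned int i, skipped = 0;

	for (i = st->bucket; i < CONN_HASH_SIZE; i++) {
		hlist_for_each(n, &conn_hash_table[i]) {
			if (i == st->bucket && skipped++ < st->skip)
				continue;
			st->bucket = i;
			return n;	/* first not-yet-dumped entry */
		}
	}
	return NULL;
}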
I have seen syzbot reports hinting at xt_hashlimit abuse:

[ 105.783066][ T4331] xt_hashlimit: max too large, truncated to 1048576
[ 105.811405][ T4331] xt_hashlimit: size too large, truncated to 1048576

and worker threads using up to 1 second per htable_selective_cleanup() invocation:

[ 269.734496][ C1] [<ffffffff81547180>] ? __local_bh_enable_ip+0x1a0/0x1a0
[ 269.734513][ C1] [<ffffffff817d75d0>] ? lockdep_hardirqs_on_prepare+0x740/0x740
[ 269.734533][ C1] [<ffffffff852e71ff>] ? htable_selective_cleanup+0x25f/0x310
[ 269.734549][ C1] [<ffffffff817dcd30>] ? __lock_acquire+0x2060/0x2060
[ 269.734567][ C1] [<ffffffff817f058a>] ? do_raw_spin_lock+0x14a/0x370
[ 269.734583][ C1] [<ffffffff852e71ff>] ? htable_selective_cleanup+0x25f/0x310
[ 269.734599][ C1] [<ffffffff81547147>] __local_bh_enable_ip+0x167/0x1a0
[ 269.734616][ C1] [<ffffffff81546fe0>] ? _local_bh_enable+0xa0/0xa0
[ 269.734634][ C1] [<ffffffff852e71ff>] ? htable_selective_cleanup+0x25f/0x310
[ 269.734651][ C1] [<ffffffff852e71ff>] htable_selective_cleanup+0x25f/0x310
[ 269.734670][ C1] [<ffffffff815b3cc9>] ? process_one_work+0x7a9/0x1170
[ 269.734685][ C1] [<ffffffff852e57db>] htable_gc+0x1b/0xa0
[ 269.734700][ C1] [<ffffffff815b3cc9>] ? process_one_work+0x7a9/0x1170
[ 269.734714][ C1] [<ffffffff815b3dc9>] process_one_work+0x8a9/0x1170
[ 269.734733][ C1] [<ffffffff815b3520>] ? worker_detach_from_pool+0x260/0x260
[ 269.734749][ C1] [<ffffffff810201c7>] ? _raw_spin_lock_irq+0xb7/0xf0
[ 269.734763][ C1] [<ffffffff81020110>] ? _raw_spin_lock_irqsave+0x100/0x100
[ 269.734777][ C1] [<ffffffff8159d3df>] ? wq_worker_sleeping+0x5f/0x270
[ 269.734800][ C1] [<ffffffff815b53c7>] worker_thread+0xa47/0x1200
[ 269.734815][ C1] [<ffffffff81020010>] ? _raw_spin_lock+0x40/0x40
[ 269.734835][ C1] [<ffffffff815c9f2a>] kthread+0x25a/0x2e0
[ 269.734853][ C1] [<ffffffff815b4980>] ? worker_clr_flags+0x190/0x190
[ 269.734866][ C1] [<ffffffff815c9cd0>] ? kthread_blkcg+0xd0/0xd0
[ 269.734885][ C1] [<ffffffff81027b1a>] ret_from_fork+0x3a/0x50

We can skip over empty buckets, avoiding the lockdep penalty for debug kernels, and avoid atomic operations on non-debug ones.

Signed-off-by: Eric Dumazet <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
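A minimal sketch of the optimisation (illustrative, not the exact diff), skipping empty buckets before taking the per-table lock:

/* Sketch: do not pay lock/unlock costs for buckets with nothing in them. */
static void htable_selective_cleanup_sketch(struct xt_hashlimit_htable *ht,
					    bool select_all)
{
	unsigned int i;

	for (i = 0; i < ht->cfg.size; i++) {
		struct dsthash_ent *dh;
		struct hlist_node *n;

		if (hlist_empty(&ht->hash[i]))
			continue;	/* nothing to expire in this bucket */

		spin_lock_bh(&ht->lock);
		hlist_for_each_entry_safe(dh, n, &ht->hash[i], node) {
			if (select_all || time_after_eq(jiffies, dh->expires))
				dsthash_free(ht, dh);
		}
		spin_unlock_bh(&ht->lock);
		cond_resched();
	}
}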
Nadia Pinaeva writes:
 I am working on a tool that allows collecting network performance metrics by using conntrack events. The start time of a conntrack entry is used to evaluate seen_reply latency, therefore the sooner it is timestamped, the better the precision is. In particular, when using this tool to compare the performance of the same feature implemented using iptables/nftables/OVS it is crucial to have the entry timestamped earlier to see any difference.

At this time, conntrack events can only get timestamped at recv time in userspace, so there can be some delay between the event being generated and the userspace process consuming the message.

There is sys/net/netfilter/nf_conntrack_timestamp, which adds a 64bit timestamp (ns resolution) that records start and stop times, but it's not suited for this either: its start time is the 'hashtable insertion time', not the 'conntrack allocation time'.

There is concern that moving the start-time moment to conntrack allocation will add overhead in case of flooding, where conntrack entries are allocated and released right away without getting inserted into the hashtable. Also, even if this were changed, it would not help with events other than new (start time) and destroy (stop time).

Pablo suggested adding a new CTA_TIMESTAMP_EVENT; this patch adds that feature. The timestamp is recorded in case both events are requested and the sys/net/netfilter/nf_conntrack_timestamp toggle is enabled.

Reported-by: Nadia Pinaeva <[email protected]>
Suggested-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Florian Westphal <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
The following patches add new drop reasons starting with the SOCKET_ prefix. Let's gather the existing SOCKET_ reasons.

Note that the order is not part of uAPI.

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
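For reference, a rough sketch of what the grouped SOCKET_-prefixed entries in enum skb_drop_reason might look like; the exact set and comments in the real header may differ:

enum skb_drop_reason {
	/* ... */
	/* socket-level drop reasons, gathered together */
	SKB_DROP_REASON_SOCKET_FILTER,	/* dropped by socket filter */
	SKB_DROP_REASON_SOCKET_RCVBUFF,	/* socket receive buffer was full */
	SKB_DROP_REASON_SOCKET_BACKLOG,	/* failed to add skb to socket backlog */
	/* ... */
};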
unix_release_sock() is called when the last refcnt of struct file is released.

Let's define a new drop reason SKB_DROP_REASON_SOCKET_CLOSE and set it for kfree_skb() in unix_release_sock().

# echo 1 > /sys/kernel/tracing/events/skb/kfree_skb/enable

# python3
>>> from socket import *
>>> s1, s2 = socketpair(AF_UNIX)
>>> s1.send(b'hello world')
>>> s2.close()

# cat /sys/kernel/tracing/trace_pipe
...
python3-280 ... kfree_skb: ... protocol=0 location=unix_release_sock+0x260/0x420 reason: SOCKET_CLOSE

To be precise, unix_release_sock() is also called for a new child socket in unix_stream_connect() when something fails, but the new sk does not have skb in the recv queue then and no event is logged.

Note that only tcp_inbound_ao_hash() uses a similar drop reason, SKB_DROP_REASON_TCP_CLOSE, and this can be generalised later.

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
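A minimal sketch of using the new reason when draining the receive queue in unix_release_sock(); surrounding logic is elided and this is not the full diff:

/* Sketch: purge the receive queue with an explicit drop reason
 * instead of a plain kfree_skb(). */
while ((skb = skb_dequeue(&sk->sk_receive_queue)) != NULL) {
	/* ... existing handling of embryo sockets elided ... */
	kfree_skb_reason(skb, SKB_DROP_REASON_SOCKET_CLOSE);
}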
unix_sock_destructor() is called as sk->sk_destruct() just before the socket is actually freed.

Let's use SKB_DROP_REASON_SOCKET_CLOSE for skb_queue_purge().

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
Inflight file descriptors passed by SCM_RIGHTS hold references to the struct file.

AF_UNIX sockets could hold references to each other, forming reference cycles. Once such sockets are close()d without the fd being recv()ed, they become inaccessible from userspace but remain in the kernel.

__unix_gc() garbage-collects skb with the dead file descriptors and frees them by __skb_queue_purge(). Let's set SKB_DROP_REASON_SOCKET_CLOSE there.

# echo 1 > /sys/kernel/tracing/events/skb/kfree_skb/enable

# python3
>>> from socket import *
>>> from array import array
>>>
>>> # Create a reference cycle
>>> s1 = socket(AF_UNIX, SOCK_DGRAM)
>>> s1.bind('')
>>> s1.sendmsg([b"nop"], [(SOL_SOCKET, SCM_RIGHTS, array("i", [s1.fileno()]))], 0, s1.getsockname())
>>> s1.close()
>>>
>>> # Trigger GC
>>> s2 = socket(AF_UNIX)
>>> s2.close()

# cat /sys/kernel/tracing/trace_pipe
...
kworker/u16:2-42 ... kfree_skb: ... location=__unix_gc+0x4ad/0x580 reason: SOCKET_CLOSE

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
connect() to a SOCK_STREAM socket could fail for various reasons. Let's set drop reasons respectively:

* NO_SOCKET      : No listening socket found
* RCV_SHUTDOWN   : The listening socket called shutdown(SHUT_RD)
* SOCKET_RCVBUFF : The listening socket's accept queue is full
* INVALID_STATE  : The client is in TCP_ESTABLISHED or TCP_LISTEN
* SECURITY_HOOK  : LSM refused connect()

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
sendmsg() to a SOCK_STREAM socket could fail for various reasons. Let's set drop reasons respectively:

* NOMEM                  : Failed to allocate SCM_RIGHTS-related structs
* UNIX_INFLIGHT_FD_LIMIT : The number of inflight fd reached RLIMIT_NOFILE
* SKB_UCOPY_FAULT        : Failed to copy data from iov_iter to skb
* SOCKET_CLOSE           : The peer socket was close()d
* SOCKET_RCV_SHUTDOWN    : The peer socket called shutdown(SHUT_RD)

unix_scm_err_to_reason() will be reused in queue_oob() and unix_dgram_sendmsg(). While at it, size and data_len are moved to the while loop scope.

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
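A rough sketch of what a helper like unix_scm_err_to_reason() could look like; the mapping shown here is illustrative and the UNIX_* reason name is one proposed by this series, not an existing uAPI value:

/* Sketch: translate SCM-related errors into skb drop reasons. */
static enum skb_drop_reason unix_scm_err_to_reason(int err)
{
	switch (err) {
	case -ENOMEM:
		return SKB_DROP_REASON_NOMEM;
	case -ETOOMANYREFS:
		return SKB_DROP_REASON_UNIX_INFLIGHT_FD_LIMIT;
	default:
		return SKB_DROP_REASON_NOT_SPECIFIED;
	}
}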
sendmsg(MSG_OOB) to a SOCK_STREAM socket could fail for various reasons. Let's set drop reasons respectively.

The drop reasons are exactly the same as in the previous patch.

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
AF_UNIX SOCK_STREAM socket supports MSG_OOB. When OOB data is sent to a socket, recv() will break at that point. If the next recv() does not have MSG_OOB, the normal data following the OOB data is returned. Then, the OOB skb is dropped.

Let's define a new drop reason for that case in manage_oob().

# echo 1 > /sys/kernel/tracing/events/skb/kfree_skb/enable

# python3
>>> from socket import *
>>> s1, s2 = socketpair(AF_UNIX)
>>> s1.send(b'a', MSG_OOB)
>>> s1.send(b'b')
>>> s2.recv(2)
b'b'

# cat /sys/kernel/tracing/trace_pipe
...
python3-223 ... kfree_skb: ... location=unix_stream_read_generic+0x59e/0xc20 reason: UNIX_SKIP_OOB

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
unix_stream_read_skb() is called when BPF SOCKMAP reads some data from a socket in the map.

SOCKMAP does not support MSG_OOB, and reading OOB results in a drop. Let's set drop reasons respectively:

* SOCKET_CLOSE  : the socket in SOCKMAP was close()d
* UNIX_SKIP_OOB : OOB was read from the socket in SOCKMAP

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
unix_dgram_disconnected() is called from two places:

1. when a connect()ed socket dis-connect()s or re-connect()s to another socket
2. when sendmsg() fails because the peer socket that the client has connect()ed to has been close()d

Then, the client's recv queue is purged to remove all messages from the old peer socket.

Let's define a new drop reason for that case.

# echo 1 > /sys/kernel/tracing/events/skb/kfree_skb/enable

# python3
>>> from socket import *
>>>
>>> # s1 has a message from s2
>>> s1, s2 = socketpair(AF_UNIX, SOCK_DGRAM)
>>> s2.send(b'hello world')
>>>
>>> # re-connect() drops the message from s2
>>> s3 = socket(AF_UNIX, SOCK_DGRAM)
>>> s3.bind('')
>>> s1.connect(s3.getsockname())

# cat /sys/kernel/tracing/trace_pipe
python3-250 ... kfree_skb: ... location=skb_queue_purge_reason+0xdc/0x110 reason: UNIX_DISCONNECT

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
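A minimal sketch of the change (illustrative), using the skb_queue_purge_reason() helper visible in the trace above; notification of the old peer is elided:

/* Sketch: purge messages from the old peer with an explicit drop reason
 * instead of a plain skb_queue_purge(). */
static void unix_dgram_disconnected(struct sock *sk, struct sock *other)
{
	if (!skb_queue_empty(&sk->sk_receive_queue)) {
		skb_queue_purge_reason(&sk->sk_receive_queue,
				       SKB_DROP_REASON_UNIX_DISCONNECT);
		/* ... waking up / notifying the old peer is elided ... */
	}
}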
sendmsg() to a SOCK_DGRAM / SOCK_SEQPACKET socket could fail for various reasons. Let's set drop reasons respectively:

* SOCKET_FILTER : BPF filter dropped the skb
* UNIX_NO_PERM  : the peer connect()ed to another socket

To reproduce UNIX_NO_PERM:

# echo 1 > /sys/kernel/tracing/events/skb/kfree_skb/enable

# python3
>>> from socket import *
>>> s1, s2 = socketpair(AF_UNIX, SOCK_DGRAM)
>>> s2.bind('')
>>> s3 = socket(AF_UNIX, SOCK_DGRAM)
>>> s3.sendto(b'hello world', s2.getsockname())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
PermissionError: [Errno 1] Operation not permitted

# cat /sys/kernel/tracing/trace_pipe
...
python3-196 ... kfree_skb: ... location=unix_dgram_sendmsg+0x3d1/0x970 reason: UNIX_NO_PERM

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
tc_actions.sh keeps hanging the forwarding tests.

sdf@: tdc & tdc-dbg started intermittently failing around Sep 25th

Signed-off-by: NipaLocal <nipa@local>
Signed-off-by: Jakub Kicinski <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
Signed-off-by: Jakub Kicinski <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
The buffers can easily get large and fail allocations. One of the networking tests running in a VM tries to run perf, which occasionally ends with:

perf: page allocation failure: order:6, mode:0x40dc0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null),cpuset=/,mems_allowed=0

even though free memory is available:

free:97464 free_pcp:321 free_cma:0

Switch to kvzalloc() to make large allocations less likely to fail.

Signed-off-by: Jakub Kicinski <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
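As a generic illustration of the switch described above (the file being changed is not identified in this excerpt), a kzalloc()/kfree() pair converted to kvzalloc()/kvfree():

/* Before: a large contiguous allocation can fail once order-6 pages
 * become scarce under fragmentation. */
buf = kzalloc(size, GFP_KERNEL);

/* After: kvzalloc() falls back to vmalloc() for large sizes, and the
 * buffer is freed with kvfree(), which handles both cases. */
buf = kvzalloc(size, GFP_KERNEL);
if (!buf)
	return -ENOMEM;
/* ... use buf ... */
kvfree(buf);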
Signed-off-by: Jakub Kicinski <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
Reusable PR for hooking netdev CI to BPF testing.