forked from GrapheneOS/linux-hardened
Where is your WIKI? #39
Comments
There isn't really one. You should just look at the commit history.
anthraxx pushed a commit that referenced this issue on Jun 22, 2020:
[ Upstream commit 3b70683 ]

ubsan reports this warning; fix it by adding an unsigned suffix.

UBSAN: signed-integer-overflow in drivers/net/ethernet/intel/ixgbe/ixgbe_common.c:2246:26
65535 * 65537 cannot be represented in type 'int'
CPU: 21 PID: 7 Comm: kworker/u256:0 Not tainted 5.7.0-rc3-debug+ #39
Hardware name: Huawei TaiShan 2280 V2/BC82AMDC, BIOS 2280-V2 03/27/2020
Workqueue: ixgbe ixgbe_service_task [ixgbe]
Call trace:
 dump_backtrace+0x0/0x3f0
 show_stack+0x28/0x38
 dump_stack+0x154/0x1e4
 ubsan_epilogue+0x18/0x60
 handle_overflow+0xf8/0x148
 __ubsan_handle_mul_overflow+0x34/0x48
 ixgbe_fc_enable_generic+0x4d0/0x590 [ixgbe]
 ixgbe_service_task+0xc20/0x1f78 [ixgbe]
 process_one_work+0x8f0/0xf18
 worker_thread+0x430/0x6d0
 kthread+0x218/0x238
 ret_from_fork+0x10/0x18

Reported-by: Hulk Robot <[email protected]>
Signed-off-by: Xie XiuQi <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
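The arithmetic is worth a second look: 65535 is 0xFFFF (a 16-bit pause time) and 65537 is 0x10001, a constant used to replicate a 16-bit value into both halves of a 32-bit register, so the product is 0xFFFFFFFF, one past INT_MAX. A minimal user-space sketch of this bug class and the one-character fix; the variable names are illustrative, not the driver's exact code:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t pause_time = 0xFFFF;	/* 65535, the value in the report */

	/* Buggy form: pause_time promotes to signed int, 0x00010001 is
	 * also an int, and 65535 * 65537 = 4294967295 > INT_MAX, so the
	 * multiply is signed overflow -- undefined behaviour, which is
	 * exactly what UBSAN flags. */
	/* uint32_t reg = pause_time * 0x00010001; */

	/* Fixed form: the U suffix makes the constant unsigned int, the
	 * other operand converts to unsigned, and the multiply becomes
	 * well-defined modular arithmetic. */
	uint32_t reg = pause_time * 0x00010001U;

	printf("reg = 0x%08x\n", reg);	/* prints reg = 0xffffffff */
	return 0;
}
```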
I have been looking for the same thing; it would be very nice to have a wiki of the kernel options introduced by the patch, without having to dig through the commits.
If a wiki is made, it would also be worth adding ways to contribute and a list of wanted features. Maybe it could be kept in the “Wiki” tab of this repo?
anthraxx pushed a commit that referenced this issue on Oct 24, 2022:
[ Upstream commit 4f49206 ]

The following warning is displayed when the tcp6-multi-diffip11 stress test case of the LTP test suite is run:

watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [ns-tcpserver:48198]
CPU: 0 PID: 48198 Comm: ns-tcpserver Kdump: loaded Not tainted 6.0.0-rc6+ #39
Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
pstate: 80400005 (Nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : des3_ede_encrypt+0x27c/0x460 [libdes]
lr : 0x3f
sp : ffff80000ceaa1b0
x29: ffff80000ceaa1b0 x28: ffff0000df056100 x27: ffff0000e51e5280
x26: ffff80004df75030 x25: ffff0000e51e4600 x24: 000000000000003b
x23: 0000000000802080 x22: 000000000000003d x21: 0000000000000038
x20: 0000000080000020 x19: 000000000000000a x18: 0000000000000033
x17: ffff0000e51e4780 x16: ffff80004e2d1448 x15: ffff80004e2d1248
x14: ffff0000e51e4680 x13: ffff80004e2d1348 x12: ffff80004e2d1548
x11: ffff80004e2d1848 x10: ffff80004e2d1648 x9 : ffff80004e2d1748
x8 : ffff80004e2d1948 x7 : 000000000bcaf83d x6 : 000000000000001b
x5 : ffff80004e2d1048 x4 : 00000000761bf3bf x3 : 000000007f1dd0a3
x2 : ffff0000e51e4780 x1 : ffff0000e3b9a2f8 x0 : 00000000db44e872
Call trace:
 des3_ede_encrypt+0x27c/0x460 [libdes]
 crypto_des3_ede_encrypt+0x1c/0x30 [des_generic]
 crypto_cbc_encrypt+0x148/0x190
 crypto_skcipher_encrypt+0x2c/0x40
 crypto_authenc_encrypt+0xc8/0xfc [authenc]
 crypto_aead_encrypt+0x2c/0x40
 echainiv_encrypt+0x144/0x1a0 [echainiv]
 crypto_aead_encrypt+0x2c/0x40
 esp6_output_tail+0x1c8/0x5d0 [esp6]
 esp6_output+0x120/0x278 [esp6]
 xfrm_output_one+0x458/0x4ec
 xfrm_output_resume+0x6c/0x1f0
 xfrm_output+0xac/0x4ac
 __xfrm6_output+0x130/0x270
 xfrm6_output+0x60/0xec
 ip6_xmit+0x2ec/0x5bc
 inet6_csk_xmit+0xbc/0x10c
 __tcp_transmit_skb+0x460/0x8c0
 tcp_write_xmit+0x348/0x890
 __tcp_push_pending_frames+0x44/0x110
 tcp_rcv_established+0x3c8/0x720
 tcp_v6_do_rcv+0xdc/0x4a0
 tcp_v6_rcv+0xc24/0xcb0
 ip6_protocol_deliver_rcu+0xf0/0x574
 ip6_input_finish+0x48/0x7c
 ip6_input+0x48/0xc0
 ip6_rcv_finish+0x80/0x9c
 xfrm_trans_reinject+0xb0/0xf4
 tasklet_action_common.constprop.0+0xf8/0x134
 tasklet_action+0x30/0x3c
 __do_softirq+0x128/0x368
 do_softirq+0xb4/0xc0
 __local_bh_enable_ip+0xb0/0xb4
 put_cpu_fpsimd_context+0x40/0x70
 kernel_neon_end+0x20/0x40
 sha1_base_do_update.constprop.0.isra.0+0x11c/0x140 [sha1_ce]
 sha1_ce_finup+0x94/0x110 [sha1_ce]
 crypto_shash_finup+0x34/0xc0
 hmac_finup+0x48/0xe0
 crypto_shash_finup+0x34/0xc0
 shash_digest_unaligned+0x74/0x90
 crypto_shash_digest+0x4c/0x9c
 shash_ahash_digest+0xc8/0xf0
 shash_async_digest+0x28/0x34
 crypto_ahash_digest+0x48/0xcc
 crypto_authenc_genicv+0x88/0xcc [authenc]
 crypto_authenc_encrypt+0xd8/0xfc [authenc]
 crypto_aead_encrypt+0x2c/0x40
 echainiv_encrypt+0x144/0x1a0 [echainiv]
 crypto_aead_encrypt+0x2c/0x40
 esp6_output_tail+0x1c8/0x5d0 [esp6]
 esp6_output+0x120/0x278 [esp6]
 xfrm_output_one+0x458/0x4ec
 xfrm_output_resume+0x6c/0x1f0
 xfrm_output+0xac/0x4ac
 __xfrm6_output+0x130/0x270
 xfrm6_output+0x60/0xec
 ip6_xmit+0x2ec/0x5bc
 inet6_csk_xmit+0xbc/0x10c
 __tcp_transmit_skb+0x460/0x8c0
 tcp_write_xmit+0x348/0x890
 __tcp_push_pending_frames+0x44/0x110
 tcp_push+0xb4/0x14c
 tcp_sendmsg_locked+0x71c/0xb64
 tcp_sendmsg+0x40/0x6c
 inet6_sendmsg+0x4c/0x80
 sock_sendmsg+0x5c/0x6c
 __sys_sendto+0x128/0x15c
 __arm64_sys_sendto+0x30/0x40
 invoke_syscall+0x50/0x120
 el0_svc_common.constprop.0+0x170/0x194
 do_el0_svc+0x38/0x4c
 el0_svc+0x28/0xe0
 el0t_64_sync_handler+0xbc/0x13c
 el0t_64_sync+0x180/0x184

Softirq info collected with the bcc tool:

./softirqs -NT 10
Tracing soft irq event time... Hit Ctrl-C to end.

15:34:34
SOFTIRQ    TOTAL_nsecs
block           158990
timer         20030920
sched         46577080
net_rx       676746820
tasklet     9906067650

15:34:45
SOFTIRQ    TOTAL_nsecs
block            86100
sched         38849790
net_rx       676532470
timer       1163848790
tasklet     9409019620

15:34:55
SOFTIRQ    TOTAL_nsecs
sched         58078450
net_rx       475156720
timer        533832410
tasklet     9431333300

The tasklet software interrupt takes too much time. Therefore, the xfrm_trans_reinject executor is changed from a tasklet to a workqueue, and a spin lock is added to protect the queue. This reduces the processing flow of the tcp_sendmsg function in this scenario.

Fixes: acf568e ("xfrm: Reinject transport-mode packets through tasklet")
Signed-off-by: Liu Jian <[email protected]>
Signed-off-by: Steffen Klassert <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
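As a rough illustration of the remedy, here is a hedged sketch of moving deferred packet reinjection from a tasklet (softirq context, which can monopolize a CPU) to a workqueue (process context, preemptible). All names are invented stand-ins for the xfrm code, netif_rx() stands in for the real reinject callback, and where the actual patch adds an explicit spin lock around its queue, this sketch leans on the spinlock built into sk_buff_head via skb_queue_tail()/skb_dequeue():

```c
#include <linux/init.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/workqueue.h>

/* Hypothetical stand-in for the xfrm reinject state. */
struct trans_sketch {
	struct work_struct work;
	struct sk_buff_head queue;
};

static struct trans_sketch trans;

/* Consumer: runs on a kworker in process context, so long
 * encrypt/transmit chains no longer starve one CPU's softirq budget. */
static void trans_reinject_work(struct work_struct *w)
{
	struct trans_sketch *t = container_of(w, struct trans_sketch, work);
	struct sk_buff *skb;

	/* skb_dequeue() takes the queue's internal spinlock, keeping
	 * producer and consumer safe across CPUs. */
	while ((skb = skb_dequeue(&t->queue)) != NULL)
		netif_rx(skb);	/* stand-in for the reinject path */
}

/* Producer: called from softirq context; queue the packet and kick
 * the worker instead of calling tasklet_schedule(). */
static void trans_enqueue(struct sk_buff *skb)
{
	skb_queue_tail(&trans.queue, skb);
	schedule_work(&trans.work);
}

static int __init trans_sketch_init(void)
{
	skb_queue_head_init(&trans.queue);
	INIT_WORK(&trans.work, trans_reinject_work);
	return 0;
}
```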
anthraxx pushed a commit that referenced this issue on May 2, 2024:
[ Upstream commit b377add ]

Both the function that migrates all the chunks within a region and the function that migrates all the entries within a chunk call list_first_entry() on the respective lists without checking that the lists are not empty. This is incorrect usage of the API, which leads to the following warning [1].

Fix by returning if the lists are empty as there is nothing to migrate in this case.

[1]
WARNING: CPU: 0 PID: 6437 at drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c:1266 mlxsw_sp_acl_tcam_vchunk_migrate_all+0x1f1/0>
Modules linked in:
CPU: 0 PID: 6437 Comm: kworker/0:37 Not tainted 6.9.0-rc3-custom-00883-g94a65f079ef6 #39
Hardware name: Mellanox Technologies Ltd. MSN3700/VMOD0005, BIOS 5.11 01/06/2019
Workqueue: mlxsw_core mlxsw_sp_acl_tcam_vregion_rehash_work
RIP: 0010:mlxsw_sp_acl_tcam_vchunk_migrate_all+0x1f1/0x2c0
[...]
Call Trace:
 <TASK>
 mlxsw_sp_acl_tcam_vregion_rehash_work+0x6c/0x4a0
 process_one_work+0x151/0x370
 worker_thread+0x2cb/0x3e0
 kthread+0xd0/0x100
 ret_from_fork+0x34/0x50
 ret_from_fork_asm+0x1a/0x30
 </TASK>

Fixes: 6f9579d ("mlxsw: spectrum_acl: Remember where to continue rehash migration")
Signed-off-by: Ido Schimmel <[email protected]>
Tested-by: Alexander Zubkov <[email protected]>
Reviewed-by: Petr Machata <[email protected]>
Signed-off-by: Petr Machata <[email protected]>
Reviewed-by: Simon Horman <[email protected]>
Link: https://lore.kernel.org/r/4628e9a22d1d84818e28310abbbc498e7bc31bc9.1713797103.git.petrm@nvidia.com
Signed-off-by: Jakub Kicinski <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
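The fix pattern itself is tiny and worth spelling out: list_first_entry() never returns NULL; on an empty list it simply computes an entry pointer from the list head itself, so the caller must check for emptiness first. A minimal sketch with invented names (the real code lives in spectrum_acl_tcam.c):

```c
#include <linux/list.h>

/* Hypothetical chunk descriptor, illustrative only. */
struct vchunk {
	struct list_head list;
	/* ... migration bookkeeping ... */
};

static int migrate_all(struct list_head *vchunk_list)
{
	struct vchunk *vchunk;

	/* The fix: nothing to migrate, so return before computing a
	 * bogus "first entry" from an empty list. */
	if (list_empty(vchunk_list))
		return 0;

	vchunk = list_first_entry(vchunk_list, struct vchunk, list);
	/* ... continue the migration starting from vchunk ... */
	return 0;
}
```

An equivalent idiom is list_first_entry_or_null(), which folds the emptiness check into the accessor and returns NULL for an empty list.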
anthraxx pushed a commit that referenced this issue on Jun 28, 2024:
[ Upstream commit f6944d4 ]

Lockdep reports the below circular locking dependency issue. The mmap_lock acquisition while holding pci_bus_sem is due to the use of copy_to_user() from within a pci_walk_bus() callback.

Building the devices array directly into the user buffer is only for convenience. Instead we can allocate a local buffer for the array, bounded by the number of devices on the bus/slot, fill the device information into this local buffer, then copy it into the user buffer outside the bus walk callback.

======================================================
WARNING: possible circular locking dependency detected
6.9.0-rc5+ #39 Not tainted
------------------------------------------------------
CPU 0/KVM/4113 is trying to acquire lock:
ffff99a609ee18a8 (&vdev->vma_lock){+.+.}-{4:4}, at: vfio_pci_mmap_fault+0x35/0x1a0 [vfio_pci_core]

but task is already holding lock:
ffff99a243a052a0 (&mm->mmap_lock){++++}-{4:4}, at: vaddr_get_pfns+0x3f/0x170 [vfio_iommu_type1]

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #3 (&mm->mmap_lock){++++}-{4:4}:
       __lock_acquire+0x4e4/0xb90
       lock_acquire+0xbc/0x2d0
       __might_fault+0x5c/0x80
       _copy_to_user+0x1e/0x60
       vfio_pci_fill_devs+0x9f/0x130 [vfio_pci_core]
       vfio_pci_walk_wrapper+0x45/0x60 [vfio_pci_core]
       __pci_walk_bus+0x6b/0xb0
       vfio_pci_ioctl_get_pci_hot_reset_info+0x10b/0x1d0 [vfio_pci_core]
       vfio_pci_core_ioctl+0x1cb/0x400 [vfio_pci_core]
       vfio_device_fops_unl_ioctl+0x7e/0x140 [vfio]
       __x64_sys_ioctl+0x8a/0xc0
       do_syscall_64+0x8d/0x170
       entry_SYSCALL_64_after_hwframe+0x76/0x7e

-> #2 (pci_bus_sem){++++}-{4:4}:
       __lock_acquire+0x4e4/0xb90
       lock_acquire+0xbc/0x2d0
       down_read+0x3e/0x160
       pci_bridge_wait_for_secondary_bus.part.0+0x33/0x2d0
       pci_reset_bus+0xdd/0x160
       vfio_pci_dev_set_hot_reset+0x256/0x270 [vfio_pci_core]
       vfio_pci_ioctl_pci_hot_reset_groups+0x1a3/0x280 [vfio_pci_core]
       vfio_pci_core_ioctl+0x3b5/0x400 [vfio_pci_core]
       vfio_device_fops_unl_ioctl+0x7e/0x140 [vfio]
       __x64_sys_ioctl+0x8a/0xc0
       do_syscall_64+0x8d/0x170
       entry_SYSCALL_64_after_hwframe+0x76/0x7e

-> #1 (&vdev->memory_lock){+.+.}-{4:4}:
       __lock_acquire+0x4e4/0xb90
       lock_acquire+0xbc/0x2d0
       down_write+0x3b/0xc0
       vfio_pci_zap_and_down_write_memory_lock+0x1c/0x30 [vfio_pci_core]
       vfio_basic_config_write+0x281/0x340 [vfio_pci_core]
       vfio_config_do_rw+0x1fa/0x300 [vfio_pci_core]
       vfio_pci_config_rw+0x75/0xe50 [vfio_pci_core]
       vfio_pci_rw+0xea/0x1a0 [vfio_pci_core]
       vfs_write+0xea/0x520
       __x64_sys_pwrite64+0x90/0xc0
       do_syscall_64+0x8d/0x170
       entry_SYSCALL_64_after_hwframe+0x76/0x7e

-> #0 (&vdev->vma_lock){+.+.}-{4:4}:
       check_prev_add+0xeb/0xcc0
       validate_chain+0x465/0x530
       __lock_acquire+0x4e4/0xb90
       lock_acquire+0xbc/0x2d0
       __mutex_lock+0x97/0xde0
       vfio_pci_mmap_fault+0x35/0x1a0 [vfio_pci_core]
       __do_fault+0x31/0x160
       do_pte_missing+0x65/0x3b0
       __handle_mm_fault+0x303/0x720
       handle_mm_fault+0x10f/0x460
       fixup_user_fault+0x7f/0x1f0
       follow_fault_pfn+0x66/0x1c0 [vfio_iommu_type1]
       vaddr_get_pfns+0xf2/0x170 [vfio_iommu_type1]
       vfio_pin_pages_remote+0x348/0x4e0 [vfio_iommu_type1]
       vfio_pin_map_dma+0xd2/0x330 [vfio_iommu_type1]
       vfio_dma_do_map+0x2c0/0x440 [vfio_iommu_type1]
       vfio_iommu_type1_ioctl+0xc5/0x1d0 [vfio_iommu_type1]
       __x64_sys_ioctl+0x8a/0xc0
       do_syscall_64+0x8d/0x170
       entry_SYSCALL_64_after_hwframe+0x76/0x7e

other info that might help us debug this:

Chain exists of:
  &vdev->vma_lock --> pci_bus_sem --> &mm->mmap_lock

Possible unsafe locking scenario:

block dm-0: the capability attribute has been deprecated.

       CPU0                    CPU1
       ----                    ----
  rlock(&mm->mmap_lock);
                               lock(pci_bus_sem);
                               lock(&mm->mmap_lock);
  lock(&vdev->vma_lock);

 *** DEADLOCK ***

2 locks held by CPU 0/KVM/4113:
 #0: ffff99a25f294888 (&iommu->lock#2){+.+.}-{4:4}, at: vfio_dma_do_map+0x60/0x440 [vfio_iommu_type1]
 #1: ffff99a243a052a0 (&mm->mmap_lock){++++}-{4:4}, at: vaddr_get_pfns+0x3f/0x170 [vfio_iommu_type1]

stack backtrace:
CPU: 1 PID: 4113 Comm: CPU 0/KVM Not tainted 6.9.0-rc5+ #39
Hardware name: Dell Inc. PowerEdge T640/04WYPY, BIOS 2.15.1 06/16/2022
Call Trace:
 <TASK>
 dump_stack_lvl+0x64/0xa0
 check_noncircular+0x131/0x150
 check_prev_add+0xeb/0xcc0
 ? add_chain_cache+0x10a/0x2f0
 ? __lock_acquire+0x4e4/0xb90
 validate_chain+0x465/0x530
 __lock_acquire+0x4e4/0xb90
 lock_acquire+0xbc/0x2d0
 ? vfio_pci_mmap_fault+0x35/0x1a0 [vfio_pci_core]
 ? lock_is_held_type+0x9a/0x110
 __mutex_lock+0x97/0xde0
 ? vfio_pci_mmap_fault+0x35/0x1a0 [vfio_pci_core]
 ? lock_acquire+0xbc/0x2d0
 ? vfio_pci_mmap_fault+0x35/0x1a0 [vfio_pci_core]
 ? find_held_lock+0x2b/0x80
 ? vfio_pci_mmap_fault+0x35/0x1a0 [vfio_pci_core]
 vfio_pci_mmap_fault+0x35/0x1a0 [vfio_pci_core]
 __do_fault+0x31/0x160
 do_pte_missing+0x65/0x3b0
 __handle_mm_fault+0x303/0x720
 handle_mm_fault+0x10f/0x460
 fixup_user_fault+0x7f/0x1f0
 follow_fault_pfn+0x66/0x1c0 [vfio_iommu_type1]
 vaddr_get_pfns+0xf2/0x170 [vfio_iommu_type1]
 vfio_pin_pages_remote+0x348/0x4e0 [vfio_iommu_type1]
 vfio_pin_map_dma+0xd2/0x330 [vfio_iommu_type1]
 vfio_dma_do_map+0x2c0/0x440 [vfio_iommu_type1]
 vfio_iommu_type1_ioctl+0xc5/0x1d0 [vfio_iommu_type1]
 __x64_sys_ioctl+0x8a/0xc0
 do_syscall_64+0x8d/0x170
 ? rcu_core+0x8d/0x250
 ? __lock_release+0x5e/0x160
 ? rcu_core+0x8d/0x250
 ? lock_release+0x5f/0x120
 ? sched_clock+0xc/0x30
 ? sched_clock_cpu+0xb/0x190
 ? irqtime_account_irq+0x40/0xc0
 ? __local_bh_enable+0x54/0x60
 ? __do_softirq+0x315/0x3ca
 ? lockdep_hardirqs_on_prepare.part.0+0x97/0x140
 entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x7f8300d0357b
Code: ff ff ff 85 c0 79 9b 49 c7 c4 ff ff ff ff 5b 5d 4c 89 e0 41 5c c3 66 0f 1f 84 00 00 00 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 75 68 0f 00 f7 d8 64 89 01 48
RSP: 002b:00007f82ef3fb948 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f8300d0357b
RDX: 00007f82ef3fb990 RSI: 0000000000003b71 RDI: 0000000000000023
RBP: 00007f82ef3fb9c0 R08: 0000000000000000 R09: 0000561b7e0bcac2
R10: 0000000000000000 R11: 0000000000000206 R12: 0000000000000000
R13: 0000000200000000 R14: 0000381800000000 R15: 0000000000000000
 </TASK>

Reviewed-by: Jason Gunthorpe <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alex Williamson <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
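The shape of the fix is easy to sketch. Below is a hedged, illustrative reconstruction, not the actual vfio code: struct dev_info, fill_cb, and get_info are invented names, and error handling is trimmed to the essentials. The key point is that copy_to_user() may fault and take mmap_lock, so it must run only after pci_walk_bus() has released pci_bus_sem:

```c
#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

/* Hypothetical per-device record returned to user space. */
struct dev_info {
	u32 segment;
	u32 bus;
	u32 devfn;
};

struct fill_ctx {
	struct dev_info *devs;	/* kernel-side scratch array */
	int count;
	int max;
};

/* Runs under pci_bus_sem: only touch kernel memory here. */
static int fill_cb(struct pci_dev *pdev, void *data)
{
	struct fill_ctx *ctx = data;

	if (ctx->count >= ctx->max)
		return 1;	/* non-zero stops the bus walk */

	ctx->devs[ctx->count].segment = pci_domain_nr(pdev->bus);
	ctx->devs[ctx->count].bus = pdev->bus->number;
	ctx->devs[ctx->count].devfn = pdev->devfn;
	ctx->count++;
	return 0;
}

static int get_info(struct pci_bus *bus, struct dev_info __user *ubuf,
		    int max)
{
	struct fill_ctx ctx = { .max = max };
	int ret = 0;

	ctx.devs = kcalloc(max, sizeof(*ctx.devs), GFP_KERNEL);
	if (!ctx.devs)
		return -ENOMEM;

	pci_walk_bus(bus, fill_cb, &ctx);	/* holds pci_bus_sem */

	/* pci_bus_sem is released now, so a fault taken here can grab
	 * mmap_lock without the inverted lock order lockdep reported. */
	if (copy_to_user(ubuf, ctx.devs, ctx.count * sizeof(*ctx.devs)))
		ret = -EFAULT;

	kfree(ctx.devs);
	return ret;
}
```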
Please suggest where I can read a list of the features of your patch.
Is this the up-to-date one?
https://wiki.archlinux.org/index.php/Security#Kernel_hardening
Where is a complete list of the kernel options introduced by your patch?
What are the recommended settings for the most hardened config on a simple communication server with no need for a desktop GUI, Skype, etc.?
I just need a simple mail server (postfix, dovecot, rspamd), most likely on Parabola with OpenRC, but I would prefer 32-bit, running in a fully software-emulated VM.