Multicast traffic is not forwarded to mrouters #2
Quoting from the wiki page "IGMP Snooping / Forwarding Behaviour": "igmp received on 'mrouter' port".
We must distinguish between fdb entries corresponding to mrouter ports in the following cases:
New corner cases:
rrendec pushed a commit that referenced this issue on Oct 9, 2013:
Commit 84c1754 (ext4: move work from io_end to inode) triggered a regression when running xfstest #270 when the file system is mounted with dioread_nolock.

The problem is that after ext4_evict_inode() calls ext4_ioend_wait(), this guarantees that the last io_end structure has been freed, but it does not guarantee that the workqueue structure, which was moved into the inode by commit 84c1754, is actually finished. Once ext4_flush_completed_IO() calls ext4_free_io_end() on CPU #1, this will allow ext4_ioend_wait() to return on CPU #2, at which point the evict_inode() codepath can race against the workqueue code on CPU #1 accessing EXT4_I(inode)->i_unwritten_work to find the next item of work to do.

Fix this by calling cancel_work_sync() in ext4_ioend_wait(), which will be renamed ext4_ioend_shutdown(), since it is only used by ext4_evict_inode(). Also, move the call to ext4_ioend_shutdown() until after truncate_inode_pages() and filemap_write_and_wait() are called, to make sure all dirty pages have been written back and flushed from the page cache first.

BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [<c01dda6a>] cwq_activate_delayed_work+0x3b/0x7e
*pdpt = 0000000030bc3001 *pde = 0000000000000000
Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
Modules linked in:
Pid: 6, comm: kworker/u:0 Not tainted 3.8.0-rc3-00013-g84c1754-dirty #91 Bochs Bochs
EIP: 0060:[<c01dda6a>] EFLAGS: 00010046 CPU: 0
EIP is at cwq_activate_delayed_work+0x3b/0x7e
EAX: 00000000 EBX: 00000000 ECX: f505fe54 EDX: 00000000
ESI: ed5b697c EDI: 00000006 EBP: f64b7e8c ESP: f64b7e84
DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
CR0: 8005003b CR2: 00000000 CR3: 30bc2000 CR4: 000006f0
DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
DR6: ffff0ff0 DR7: 00000400
Process kworker/u:0 (pid: 6, ti=f64b6000 task=f64b4160 task.ti=f64b6000)
Stack:
 f505fe00 00000006 f64b7e9c c01de3d7 f6435540 00000003 f64b7efc c01def1d
 f6435540 00000002 00000000 0000008a c16d0808 c040a10b c16d07d8 c16d08b0
 f505fe00 c16d0780 00000000 00000000 ee153df4 c1ce4a30 c17d0e30 00000000
Call Trace:
 [<c01de3d7>] cwq_dec_nr_in_flight+0x71/0xfb
 [<c01def1d>] process_one_work+0x5d8/0x637
 [<c040a10b>] ? ext4_end_bio+0x300/0x300
 [<c01e3105>] worker_thread+0x249/0x3ef
 [<c01ea317>] kthread+0xd8/0xeb
 [<c01e2ebc>] ? manage_workers+0x4bb/0x4bb
 [<c023a370>] ? trace_hardirqs_on+0x27/0x37
 [<c0f1b4b7>] ret_from_kernel_thread+0x1b/0x28
 [<c01ea23f>] ? __init_kthread_worker+0x71/0x71
Code: 01 83 15 ac ff 6c c1 00 31 db 89 c6 8b 00 a8 04 74 12 89 c3 30 db 83 05 b0 ff 6c c1 01 83 15 b4 ff 6c c1 00 89 f0 e8 42 ff ff ff <8b> 13 89 f0 83 05 b8 ff 6c c1 6c c1 00 31 c9 83
EIP: [<c01dda6a>] cwq_activate_delayed_work+0x3b/0x7e SS:ESP 0068:f64b7e84
CR2: 0000000000000000
---[ end trace a1923229da53d8a4 ]---

Signed-off-by: "Theodore Ts'o" <[email protected]>
Cc: Jan Kara <[email protected]>
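Editor's note: a minimal sketch of the shutdown helper the commit message describes, assuming the ext4 internals of that era (ext4_ioend_wq(), i_ioend_count, i_unwritten_work); this is not the verbatim patch:

```c
/*
 * Sketch only: waiting for the last io_end is no longer enough once
 * the work struct lives in the inode, so the queued work item must
 * also be cancelled synchronously before the inode can be freed.
 */
static void ext4_ioend_shutdown(struct inode *inode)
{
	wait_queue_head_t *wq = ext4_ioend_wq(inode);

	/* Wait until the last io_end structure has been freed... */
	wait_event(*wq, (atomic_read(&EXT4_I(inode)->i_ioend_count) == 0));

	/*
	 * ...then make sure the work structure embedded in the inode
	 * is finished being used before evict_inode() proceeds.
	 */
	if (work_pending(&EXT4_I(inode)->i_unwritten_work))
		cancel_work_sync(&EXT4_I(inode)->i_unwritten_work);
}
```

Per the ordering requirement above, ext4_evict_inode() would call this only after truncate_inode_pages() and filemap_write_and_wait().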
rrendec pushed a commit that referenced this issue on Oct 9, 2013:
We can deadlock (s_active and fcoe_config_mutex) if a port is being destroyed at the same time one is being created.

[ 4200.503113] ======================================================
[ 4200.503114] [ INFO: possible circular locking dependency detected ]
[ 4200.503116] 3.8.0-rc5+ #8 Not tainted
[ 4200.503117] -------------------------------------------------------
[ 4200.503118] kworker/3:2/2492 is trying to acquire lock:
[ 4200.503119]  (s_active#292){++++.+}, at: [<ffffffff8122d20b>] sysfs_addrm_finish+0x3b/0x70
[ 4200.503127] but task is already holding lock:
[ 4200.503128]  (fcoe_config_mutex){+.+.+.}, at: [<ffffffffa02f3338>] fcoe_destroy_work+0xe8/0x120 [fcoe]
[ 4200.503133] which lock already depends on the new lock.
[ 4200.503135] the existing dependency chain (in reverse order) is:
[ 4200.503136] -> #1 (fcoe_config_mutex){+.+.+.}:
[ 4200.503139]        [<ffffffff810c7711>] lock_acquire+0xa1/0x140
[ 4200.503143]        [<ffffffff816ca7be>] mutex_lock_nested+0x6e/0x360
[ 4200.503146]        [<ffffffffa02f11bd>] fcoe_enable+0x1d/0xb0 [fcoe]
[ 4200.503148]        [<ffffffffa02f127d>] fcoe_ctlr_enabled+0x2d/0x50 [fcoe]
[ 4200.503151]        [<ffffffffa02ffbe8>] store_ctlr_enabled+0x38/0x90 [libfcoe]
[ 4200.503154]        [<ffffffff81424878>] dev_attr_store+0x18/0x30
[ 4200.503157]        [<ffffffff8122b750>] sysfs_write_file+0xe0/0x150
[ 4200.503160]        [<ffffffff811b334c>] vfs_write+0xac/0x180
[ 4200.503162]        [<ffffffff811b3692>] sys_write+0x52/0xa0
[ 4200.503164]        [<ffffffff816d7159>] system_call_fastpath+0x16/0x1b
[ 4200.503167] -> #0 (s_active#292){++++.+}:
[ 4200.503170]        [<ffffffff810c680f>] __lock_acquire+0x135f/0x1c90
[ 4200.503172]        [<ffffffff810c7711>] lock_acquire+0xa1/0x140
[ 4200.503174]        [<ffffffff8122c626>] sysfs_deactivate+0x116/0x160
[ 4200.503176]        [<ffffffff8122d20b>] sysfs_addrm_finish+0x3b/0x70
[ 4200.503178]        [<ffffffff8122b2eb>] sysfs_hash_and_remove+0x5b/0xb0
[ 4200.503180]        [<ffffffff8122f3d1>] sysfs_remove_group+0x61/0x100
[ 4200.503183]        [<ffffffff814251eb>] device_remove_groups+0x3b/0x60
[ 4200.503185]        [<ffffffff81425534>] device_remove_attrs+0x44/0x80
[ 4200.503187]        [<ffffffff81425e97>] device_del+0x127/0x1c0
[ 4200.503189]        [<ffffffff81425f52>] device_unregister+0x22/0x60
[ 4200.503191]        [<ffffffffa0300970>] fcoe_ctlr_device_delete+0xe0/0xf0 [libfcoe]
[ 4200.503194]        [<ffffffffa02f1b5c>] fcoe_interface_cleanup+0x6c/0xa0 [fcoe]
[ 4200.503196]        [<ffffffffa02f3355>] fcoe_destroy_work+0x105/0x120 [fcoe]
[ 4200.503198]        [<ffffffff8107ee91>] process_one_work+0x1a1/0x580
[ 4200.503203]        [<ffffffff81080c6e>] worker_thread+0x15e/0x440
[ 4200.503205]        [<ffffffff8108715a>] kthread+0xea/0xf0
[ 4200.503207]        [<ffffffff816d70ac>] ret_from_fork+0x7c/0xb0
[ 4200.503209] other info that might help us debug this:
[ 4200.503211]  Possible unsafe locking scenario:
[ 4200.503212]        CPU0                    CPU1
[ 4200.503213]        ----                    ----
[ 4200.503214]   lock(fcoe_config_mutex);
[ 4200.503215]                               lock(s_active#292);
[ 4200.503218]                               lock(fcoe_config_mutex);
[ 4200.503219]   lock(s_active#292);
[ 4200.503221]  *** DEADLOCK ***
[ 4200.503223] 3 locks held by kworker/3:2/2492:
[ 4200.503224]  #0:  (fcoe){.+.+.+}, at: [<ffffffff8107ee2b>] process_one_work+0x13b/0x580
[ 4200.503228]  #1:  ((&port->destroy_work)){+.+.+.}, at: [<ffffffff8107ee2b>] process_one_work+0x13b/0x580
[ 4200.503232]  #2:  (fcoe_config_mutex){+.+.+.}, at: [<ffffffffa02f3338>] fcoe_destroy_work+0xe8/0x120 [fcoe]
[ 4200.503236] stack backtrace:
[ 4200.503238] Pid: 2492, comm: kworker/3:2 Not tainted 3.8.0-rc5+ #8
[ 4200.503240] Call Trace:
[ 4200.503243]  [<ffffffff816c2f09>] print_circular_bug+0x1fb/0x20c
[ 4200.503246]  [<ffffffff810c680f>] __lock_acquire+0x135f/0x1c90
[ 4200.503248]  [<ffffffff810c463a>] ? debug_check_no_locks_freed+0x9a/0x180
[ 4200.503250]  [<ffffffff810c7711>] lock_acquire+0xa1/0x140
[ 4200.503253]  [<ffffffff8122d20b>] ? sysfs_addrm_finish+0x3b/0x70
[ 4200.503255]  [<ffffffff8122c626>] sysfs_deactivate+0x116/0x160
[ 4200.503258]  [<ffffffff8122d20b>] ? sysfs_addrm_finish+0x3b/0x70
[ 4200.503260]  [<ffffffff8122d20b>] sysfs_addrm_finish+0x3b/0x70
[ 4200.503262]  [<ffffffff8122b2eb>] sysfs_hash_and_remove+0x5b/0xb0
[ 4200.503265]  [<ffffffff8122f3d1>] sysfs_remove_group+0x61/0x100
[ 4200.503273]  [<ffffffff814251eb>] device_remove_groups+0x3b/0x60
[ 4200.503275]  [<ffffffff81425534>] device_remove_attrs+0x44/0x80
[ 4200.503277]  [<ffffffff81425e97>] device_del+0x127/0x1c0
[ 4200.503279]  [<ffffffff81425f52>] device_unregister+0x22/0x60
[ 4200.503282]  [<ffffffffa0300970>] fcoe_ctlr_device_delete+0xe0/0xf0 [libfcoe]
[ 4200.503285]  [<ffffffffa02f1b5c>] fcoe_interface_cleanup+0x6c/0xa0 [fcoe]
[ 4200.503287]  [<ffffffffa02f3355>] fcoe_destroy_work+0x105/0x120 [fcoe]
[ 4200.503290]  [<ffffffff8107ee91>] process_one_work+0x1a1/0x580
[ 4200.503292]  [<ffffffff8107ee2b>] ? process_one_work+0x13b/0x580
[ 4200.503295]  [<ffffffffa02f3250>] ? fcoe_if_destroy+0x230/0x230 [fcoe]
[ 4200.503297]  [<ffffffff81080c6e>] worker_thread+0x15e/0x440
[ 4200.503299]  [<ffffffff81080b10>] ? busy_worker_rebind_fn+0x100/0x100
[ 4200.503301]  [<ffffffff8108715a>] kthread+0xea/0xf0
[ 4200.503304]  [<ffffffff81087070>] ? kthread_create_on_node+0x160/0x160
[ 4200.503306]  [<ffffffff816d70ac>] ret_from_fork+0x7c/0xb0
[ 4200.503308]  [<ffffffff81087070>] ? kthread_create_on_node+0x160/0x160

Signed-off-by: Robert Love <[email protected]>
Tested-by: Jack Morgan <[email protected]>
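Editor's note: the inverted ordering is easier to see stripped of driver detail. Below is a deliberately minimal userspace analogue (plain C and pthreads; the mutex names merely mirror the kernel locks, and the kernel paths are noted in comments) that reproduces the same A->B vs B->A pattern lockdep is reporting. With unlucky timing the two threads deadlock exactly as in the "unsafe locking scenario" above:

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t config_mutex = PTHREAD_MUTEX_INITIALIZER; /* fcoe_config_mutex */
static pthread_mutex_t s_active     = PTHREAD_MUTEX_INITIALIZER; /* sysfs s_active    */

/* Like fcoe_destroy_work(): config_mutex first, then sysfs removal. */
static void *destroy_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&config_mutex);
	pthread_mutex_lock(&s_active);   /* device_unregister() -> sysfs removal */
	pthread_mutex_unlock(&s_active);
	pthread_mutex_unlock(&config_mutex);
	return NULL;
}

/* Like store_ctlr_enabled(): s_active is held while the attribute
 * handler runs, then fcoe_enable() takes config_mutex. */
static void *sysfs_write_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&s_active);
	pthread_mutex_lock(&config_mutex);
	pthread_mutex_unlock(&config_mutex);
	pthread_mutex_unlock(&s_active);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, destroy_path, NULL);
	pthread_create(&b, NULL, sysfs_write_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("no deadlock this time");  /* with bad luck, never reached */
	return 0;
}
```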
rrendec pushed a commit that referenced this issue on Oct 9, 2013:
The sched_clock_remote() implementation has the following inatomicity problem on 32bit systems when accessing the remote scd->clock, which is a 64bit value:

    CPU0                          CPU1

    sched_clock_local()           sched_clock_remote(CPU0)
    ...
                                  remote_clock = scd[CPU0]->clock
                                      read_low32bit(scd[CPU0]->clock)
    cmpxchg64(scd->clock,...)
                                      read_high32bit(scd[CPU0]->clock)

While the update of scd->clock is using an atomic64 mechanism, the readout on the remote cpu is not, which can cause completely bogus readouts.

It is a quite rare problem, because it requires the update to hit the narrow race window between the low/high readout and the update must go across the 32bit boundary.

The resulting misbehaviour is that CPU1 will see the sched_clock of CPU0 ~4 seconds ahead of its own and update CPU1's sched_clock value to this bogus timestamp. This stays that way due to the clamping implementation for about 4 seconds, until the synchronization with CLOCK_MONOTONIC undoes the problem.

The issue is hard to observe, because it might only result in a less accurate SCHED_OTHER timeslicing behaviour. To create observable damage on realtime scheduling classes, it is necessary that the bogus update of CPU1's sched_clock happens in the context of a realtime thread, which then gets charged 4 seconds of RT runtime, which causes the RT throttler mechanism to trigger and prevent scheduling of RT tasks for a little less than 4 seconds. So this is quite unlikely as well.

The issue was quite hard to decode, as the reproduction time is between 2 days and 3 weeks and intrusive tracing makes it less likely, but the following trace recorded with trace_clock=global, which uses sched_clock_local(), gave the final hint:

    <idle>-0     0d..30 400269.477150: hrtimer_cancel: hrtimer=0xf7061e80
    <idle>-0     0d..30 400269.477151: hrtimer_start:  hrtimer=0xf7061e80
    ...
    irq/20-S-587 1d..32 400273.772118: sched_wakeup: comm= ... target_cpu=0
    <idle>-0     0dN.30 400273.772118: hrtimer_cancel: hrtimer=0xf7061e80

What happens is that CPU0 goes idle and invokes sched_clock_idle_sleep_event(), which invokes sched_clock_local(), and CPU1 runs a remote wakeup for CPU0 at the same time, which invokes sched_clock_remote(). The time jump gets propagated to CPU0 via sched_clock_remote() and stays stale on both cores for ~4 seconds.

There are only two other possibilities which could cause a stale sched clock:

1) ktime_get(), which reads out CLOCK_MONOTONIC, returns a sporadic wrong value.
2) sched_clock(), which reads the TSC, returns a sporadic wrong value.

#1 can be excluded because sched_clock would continue to increase for one jiffy and then go stale. #2 can be excluded because it would not make the clock jump forward. It would just result in a stale sched_clock for one jiffy.

After quite some brain twisting and finding the same pattern on other traces, sched_clock_remote() remained the only place which could cause such a problem, and as explained above it is indeed racy on 32bit systems.

So while on 64bit systems the readout is atomic, we need to verify the remote readout on 32bit machines. We need to protect the local->clock readout in sched_clock_remote() on 32bit as well, because an NMI could hit between the low and the high readout, call sched_clock_local() and modify local->clock.

Thanks to Siegfried Wulsch for bearing with my debug requests and going through the tedious tasks of running a bunch of reproducer systems to generate the debug information which let me decode the issue.

Reported-by: Siegfried Wulsch <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt <[email protected]>
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1304051544160.21884@ionos
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
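Editor's note: the standard remedy the description points at is to make the 32bit readout atomic as well. A sketch, with the helper name invented for illustration; on 32bit, cmpxchg64() with old == new == 0 never changes a live clock value but always returns the current contents of the location as one atomic operation:

```c
#include <linux/types.h>
#include <linux/atomic.h>	/* cmpxchg64() */
#include <linux/bitops.h>	/* BITS_PER_LONG */

/* Sketch only: atomic readout of a 64-bit sched_clock value. */
static u64 scd_clock_read(u64 *clock)
{
#if BITS_PER_LONG == 64
	return *clock;			/* naturally atomic on 64bit */
#else
	/* If *clock happens to be 0, this stores 0 (a no-op); in all
	 * cases it returns the old value as a single atomic access. */
	return cmpxchg64(clock, 0, 0);
#endif
}
```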
When a multicast group is established (the first IGMP subscriber appears, either user-defined or dynamic), all mrouter ports in the same vlan as the multicast group must implicitly become group members.
This is currently worked around in userspace by the "show ip igmp snooping groups" command handler, which fetches the mrouter list from the kernel and adds those interfaces to the (internally cooked) group list structure. The underlying problem remains: multicast traffic belonging to that group is not actually forwarded to the mrouter ports.
Fixing approach: also create special fdb entries for mrouter ports, but only when the multicast group is actually established (i.e. when the first IGMP subscriber appears, either user-defined or dynamic).
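A sketch of how this could look, with all structure and function names invented for illustration (the real fdb types in this tree will differ): when the first subscriber establishes the group, every current mrouter port in the vlan gets a special member entry.

```c
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/bitops.h>	/* for_each_set_bit() */
#include <linux/if_ether.h>	/* ETH_ALEN */

/* Illustrative types only. */
struct mc_group_port {
	struct list_head list;
	int port_no;
	bool mrouter;	/* special entry: tied to mrouter status, not to
			 * an IGMP report, so it is skipped by report
			 * ageing and removed with the mrouter status */
};

struct mc_group {
	struct list_head ports;	/* normal + special members */
	u16 vid;
	u8 addr[ETH_ALEN];
};

/*
 * Called when the first IGMP subscriber (user-defined or dynamic)
 * creates the group: implicitly add every mrouter port in the vlan
 * as a special member, so group traffic reaches the routers.
 */
static int mc_group_add_mrouters(struct mc_group *grp,
				 const unsigned long *mrouter_ports,
				 int max_ports)
{
	int port;

	for_each_set_bit(port, mrouter_ports, max_ports) {
		struct mc_group_port *p = kzalloc(sizeof(*p), GFP_ATOMIC);

		if (!p)
			return -ENOMEM;
		p->port_no = port;
		p->mrouter = true;	/* mark as special */
		list_add_tail(&p->list, &grp->ports);
	}
	return 0;
}
```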
Corner cases:
- a port loses its mrouter status while there are established groups in the vlan; its special fdb entries for those groups must be removed.
Synchronization: the general fdb synchronization approach (test -> lock -> test again -> change -> unlock) is also suitable for the IGMP snooping special entries. Note that, by design, this mechanism allows changing the fdb from both process and softirq (packet-processing) context, which makes it a good fit for IGMP snooping, where changes occur from both contexts.
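A sketch of that pattern applied to group creation, continuing the illustrative names from above (mc_group_find()/mc_group_create() stand in for the real lookup and insert helpers):

```c
#include <linux/spinlock.h>
#include <linux/types.h>

static DEFINE_SPINLOCK(mc_fdb_lock);

/* Hypothetical helpers: lockless hash lookup, locked insert. */
struct mc_group *mc_group_find(u16 vid, const u8 *addr);
struct mc_group *mc_group_create(u16 vid, const u8 *addr);

/*
 * test -> lock -> test again -> change -> unlock, usable from both
 * process and softirq context: spin_lock_bh() disables softirqs, so
 * a packet-processing writer cannot preempt us on this CPU, and the
 * spinlock itself excludes writers on all other CPUs.
 */
static struct mc_group *mc_group_get_or_create(u16 vid, const u8 *addr)
{
	struct mc_group *grp;

	grp = mc_group_find(vid, addr);		/* test (lockless) */
	if (grp)
		return grp;

	spin_lock_bh(&mc_fdb_lock);		/* lock */
	grp = mc_group_find(vid, addr);		/* test again */
	if (!grp)
		grp = mc_group_create(vid, addr); /* change */
	spin_unlock_bh(&mc_fdb_lock);		/* unlock */

	return grp;
}
```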