Build date: 1732691117 - Wed Nov 27 07:05:17 UTC 2024
Build cvs date: 1732685241 - Wed Nov 27 05:27:21 UTC 2024
Build id: 2024-11-27.2
Build tags: amd64-regress sysupgrade

Applied the following diff(s):
	/home/anton/tmp/robsd/src-lldb.diff
	/home/anton/tmp/robsd/src-sys-em.diff
	/home/anton/tmp/robsd/src-sys-newvers.diff
	/home/anton/tmp/robsd/src-sys-uhidev-sispm.diff
	/home/anton/tmp/robsd/src-sysupgrade.diff

?	regress/sys/kern/ptrace/xstate
P	distrib/sets/lists/man/mi
P	lib/libc/sys/ptrace.2
P	regress/sys/kern/ptrace/Makefile
U	regress/sys/kern/ptrace/xstate/Makefile
U	regress/sys/kern/ptrace/xstate/avx.S
U	regress/sys/kern/ptrace/xstate/xstate.c
P	sys/arch/amd64/amd64/process_machdep.c
P	sys/arch/amd64/include/ptrace.h
P	sys/dev/pci/if_iavf.c
P	sys/dev/pv/xen.c
P	sys/kern/kern_rwlock.c
P	sys/kern/sys_process.c
P	sys/sys/rwlock.h

commit olqZwkqDlzLxeeeI
Author:	anton
Date:	2024/11/27 05:27:21

 hook up ptrace xstate regress

	regress/sys/kern/ptrace/Makefile

commit c8yyrfOqEu30cYTT
Author:	anton
Date:	2024/11/27 05:26:58

 Add ptrace xstate regress suite.

	regress/sys/kern/ptrace/xstate/Makefile
	regress/sys/kern/ptrace/xstate/avx.S
	regress/sys/kern/ptrace/xstate/xstate.c

commit uYQOHir0PPYyROai
Author:	anton
Date:	2024/11/27 05:25:57

 Add ptrace commands used to read/write the XSAVE area of a traced
 process. Intended to give debuggers access to xmm/ymm registers.
 Inspired by FreeBSD which exposes a similar set of ptrace commands.
 [see the usage sketch after this log]

 ok kettenis@

	lib/libc/sys/ptrace.2
	sys/arch/amd64/amd64/process_machdep.c
	sys/arch/amd64/include/ptrace.h
	sys/kern/sys_process.c

commit XS6QeeasG8BoWfsr
Author:	deraadt
Date:	2024/11/27 04:05:47

 sync

	distrib/sets/lists/man/mi

commit pKDKkqzkcRRLYGsS
Author:	yasuoka
Date:	2024/11/27 02:40:53

 Enable rx/tx checksum offloading on iavf(4).

 from Yuichiro NAITO and jan; test jan
 ok jan jmatthew

	sys/dev/pci/if_iavf.c

commit icrG9MiFdVM0VEvL
Author:	jsg
Date:	2024/11/27 02:38:35

 continue enumerating devices if a device is not matched

 fixes xbf(4) and xnf(4) not attaching on XCP-ng 8.3/Xen 4.17 which
 has "device/9pfs/"

 from Joel Knight

	sys/dev/pv/xen.c

commit Pm6O6WWoxsP3qXHY
Author:	jsg
Date:	2024/11/27 02:14:48

 zero attach args; return on missing properties will be removed

	sys/dev/pv/xen.c

commit whqN1iLhdVTRkQlU
Author:	dlg
Date:	2024/11/27 01:02:03

 rework rwlocks to reduce pressure on the scheduler and SCHED_LOCK

 it's become obvious that heavily contended rwlocks put a lot of
 pressure on the scheduler, and we have enough contended locks that
 there's a benefit to changing rwlocks to try and mitigate against
 that pressure.

 when a thread is waiting on an rwlock, it sets a bit in the rwlock
 to indicate that when the current owner of the rwlock leaves the
 critical section, it should wake up the waiting thread to try and
 take the lock. if there's no waiting thread, the owner can skip
 the wakeup.

 the problem is that rwlocks can't tell the difference between one
 waiting thread and more than one waiting thread, so when the
 "there's a thread waiting" bit is cleared, all the waiting threads
 are woken up. one of these woken threads will take ownership of the
 lock, but, also importantly, the other threads will end up setting
 the "i'm waiting" bit again, which is necessary for them to be
 woken up by the 2nd thread that won the race to become the owner
 of the lock.

 this is compounded by pending writers and readers waiting on the
 same wait channel. an rwlock may have one pending writer trying to
 take the lock, but many readers waiting for it too. it would make
 sense to wake up only the writer so it can take the lock next, but
 we end up waking the readers at the same time.

 the result of this is that contended rwlocks wake up a lot of
 threads, which puts a lot of pressure on the scheduler. this is
 noticeable as a lot of contention on the scheduler lock, which is
 a spinning lock that increases time used by the system. this is a
 pretty classic thundering herd problem.

 this change mitigates against these wakeups by adding counters to
 rwlocks for the number of threads waiting to take write and read
 locks, instead of relying on bits. when a thread needs to wait for
 an rwlock it increments the relevant counter before sleeping; after
 it is woken up and takes the lock it decrements that counter. this
 means rwlocks know how many threads are waiting at all times,
 without having to wake everything up to rebuild state every time a
 thread releases the lock.

 pending writers and readers also wait on separate wchans. this
 allows us to prioritise writers and to wake them up one at a time.
 once there are no pending writers, all pending readers can be woken
 up in one go so they can share the lock as soon as possible.

 if you are suffering a contended rwlock, this should reduce the
 amount of time spent spinning on the sched lock, which in turn may
 also reduce the wall clock time spent doing that work.

 the only downside to this change in my opinion is that it grows
 struct rwlock by 8 bytes. if we can reduce rwlock contention in the
 future, i reckon i could shrink the rwlock struct again while still
 avoiding some of the scheduler interactions.

 work with claudio@
 ok claudio@ mpi@ stsp@
 testing by many including claudio@ landry@ stsp@ sthen@ phessler@
 tb@ and mark patruck

	sys/kern/kern_rwlock.c
	sys/sys/rwlock.h
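
--

Editor's sketch for the rwlock rework: the following is a minimal userland
analogue of the scheme dlg describes, not the kernel's kern_rwlock.c code.
Two pthread condition variables stand in for the separate kernel wait
channels, and the wwait/rwait counters mirror the new per-class waiter
counts: writers are preferred and signalled one at a time, readers are
broadcast in one go once no writer is pending. All names here (xrwlock,
xrw_enter_write, etc.) are hypothetical.

/* userland illustration of counter-based rwlock wakeups; assumptions above */
#include <pthread.h>

struct xrwlock {
	pthread_mutex_t	 mtx;		/* protects the fields below */
	pthread_cond_t	 wcv;		/* "writer" wchan: signalled one at a time */
	pthread_cond_t	 rcv;		/* "reader" wchan: broadcast in one go */
	unsigned int	 readers;	/* active shared owners */
	int		 writer;	/* an exclusive owner holds the lock */
	unsigned int	 wwait;		/* pending writers (counter, not a bit) */
	unsigned int	 rwait;		/* pending readers (counter, not a bit) */
};

#define XRWLOCK_INITIALIZER {					\
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER,	\
	PTHREAD_COND_INITIALIZER, 0, 0, 0, 0 }

void
xrw_enter_write(struct xrwlock *rwl)
{
	pthread_mutex_lock(&rwl->mtx);
	while (rwl->writer || rwl->readers > 0) {
		rwl->wwait++;		/* count before sleeping... */
		pthread_cond_wait(&rwl->wcv, &rwl->mtx);
		rwl->wwait--;		/* ...decrement after waking */
	}
	rwl->writer = 1;
	pthread_mutex_unlock(&rwl->mtx);
}

void
xrw_enter_read(struct xrwlock *rwl)
{
	pthread_mutex_lock(&rwl->mtx);
	/* writers are prioritised: readers also defer to pending writers */
	while (rwl->writer || rwl->wwait > 0) {
		rwl->rwait++;
		pthread_cond_wait(&rwl->rcv, &rwl->mtx);
		rwl->rwait--;
	}
	rwl->readers++;
	pthread_mutex_unlock(&rwl->mtx);
}

void
xrw_exit(struct xrwlock *rwl)
{
	pthread_mutex_lock(&rwl->mtx);
	if (rwl->writer)
		rwl->writer = 0;
	else
		rwl->readers--;
	if (rwl->writer == 0 && rwl->readers == 0) {
		if (rwl->wwait > 0)
			pthread_cond_signal(&rwl->wcv);	   /* one writer */
		else if (rwl->rwait > 0)
			pthread_cond_broadcast(&rwl->rcv); /* all readers */
	}
	pthread_mutex_unlock(&rwl->mtx);
}

Because the counters are exact, the releasing thread wakes exactly one
pending writer, or all pending readers, and skips the wakeup entirely when
both counters are zero; this is the property that avoids the thundering
herd described in the commit message.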
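Returning to anton's XSTATE commit earlier in this log: the sketch below
shows how a debugger might drive such an interface. The request names
PT_GETXSTATE_INFO/PT_GETXSTATE and the struct ptrace_xstate_info fields
are assumptions borrowed from the FreeBSD interface the commit cites, not
confirmed OpenBSD names; the authoritative commands, struct layout, and
semantics are in the updated ptrace(2) manual page.

/*
 * Hypothetical sketch only: command names and info struct assumed
 * from the FreeBSD model; consult ptrace(2) and <machine/ptrace.h>
 * for the real interface.
 */
#include <sys/types.h>
#include <sys/ptrace.h>
#include <sys/wait.h>

#include <err.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void)
{
	struct ptrace_xstate_info info;	/* assumed: xsave_mask + xsave_len */
	pid_t pid;
	void *buf;
	int status;

	if ((pid = fork()) == -1)
		err(1, "fork");
	if (pid == 0) {
		/* child: ask to be traced, then stop for the parent */
		if (ptrace(PT_TRACE_ME, 0, NULL, 0) == -1)
			err(1, "PT_TRACE_ME");
		raise(SIGSTOP);
		_exit(0);
	}
	if (waitpid(pid, &status, 0) == -1)
		err(1, "waitpid");

	/* discover the size of the traced process' XSAVE area */
	if (ptrace(PT_GETXSTATE_INFO, pid, (caddr_t)&info,
	    sizeof(info)) == -1)
		err(1, "PT_GETXSTATE_INFO");
	if ((buf = malloc(info.xsave_len)) == NULL)
		err(1, NULL);

	/* read the raw XSAVE area, which carries the xmm/ymm state */
	if (ptrace(PT_GETXSTATE, pid, (caddr_t)buf, info.xsave_len) == -1)
		err(1, "PT_GETXSTATE");
	printf("xsave area: %lu bytes\n", (unsigned long)info.xsave_len);

	if (ptrace(PT_DETACH, pid, NULL, 0) == -1)
		err(1, "PT_DETACH");
	free(buf);
	return 0;
}

The two-step pattern (query the variable-sized area's length, then fetch
the raw buffer) is how the regress suite added alongside the commit can
verify that register contents written in avx.S are visible to a tracer.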