Build date: 1775944802 - Sat Apr 11 22:00:02 UTC 2026
Build cvs date: 1775927095 - Sat Apr 11 17:04:55 UTC 2026
Build id: 2026-04-12.1
Build tags: amd64-regress ports sysupgrade

Applied the following diff(s):
/home/anton/tmp/robsd/src-sys-em.diff
/home/anton/tmp/robsd/src-sys-uhidev-sispm.diff
/home/anton/tmp/robsd/src-sysupgrade.diff

P lib/libc/sys/unveil.2
P sys/arch/amd64/amd64/vmm_machdep.c
P sys/arch/amd64/isa/clock.c
M sys/dev/usb/uhidev.c
P sys/kern/exec_elf.c
P sys/kern/kern_unveil.c
P sys/uvm/uvm_pdaemon.c
P sys/uvm/uvm_swap.c
P sys/uvm/uvm_swap.h
M usr.sbin/bgpd/session.c
P usr.sbin/rpki-client/main.c

commit xiPfDxl6dWxka5nm
Author: deraadt
Date: 2026/04/11 17:04:55

Before it is disabled, unveil allows you to override the settings on any
vnode.  A block of #if 0 code suggests this might be different; that can
be deleted.  This also shows that one word, "other", in the manual page
is misleading.

question asked by Stuart Thomas
ok beck

lib/libc/sys/unveil.2
sys/kern/kern_unveil.c

commit zbxSCJ0AWcMm5HUV
Author: deraadt
Date: 2026/04/11 16:24:13

wrap the ; on a single while() line

sys/arch/amd64/isa/clock.c

commit JUA2ws01No0Tq2vl
Author: deraadt
Date: 2026/04/11 16:12:40

A binary without a PT_LOAD exec segment would later read a pinsyscall
table and damage it strangely.  Such a binary cannot actually run, but we
should avoid the internal pinsyscall table damage, and fail the execve
with EINVAL.

reported by Stuart Thomas
ok guenther

sys/kern/exec_elf.c

commit upLfovpW5VRffVE0
Author: cludwig
Date: 2026/04/11 15:59:44

vmm: Handle reserved bits in debug registers

vmm(4) handles the %dr6 debug register on VMX on its own; it is not part
of the VMCB.  The AMD and Intel SDMs mention that a 'MOV DRn' instruction
traps with #GP when any of the upper 32 bits of %dr6/%dr7 is 1.  Userland
can set arbitrary values in that register, forcing an Intel machine to
crash.  An initial bogus %dr7 fails to launch the VM on both platforms.
Reject such debug register values on all platforms.
ok mlarkin@
Reported-by: syzbot+f386e2f64711877025a6@syzkaller.appspotmail.com

sys/arch/amd64/amd64/vmm_machdep.c

commit o7QRtXXDVIdMyscE
Author: claudio
Date: 2026/04/11 12:02:50

Call repo_check_timeout() before collecting the POLLOUT fds, since
repo_abort() called by repo_check_timeout() will add messages to be sent
out.  This brings back rev 1.263, which was accidentally reverted by rev
1.293.

OK tb@

usr.sbin/rpki-client/main.c

commit dX4JHNf7kyUaibFU
Author: deraadt
Date: 2026/04/11 01:57:22

When the pagedaemon is triggered to create free memory, there may be
sleeping pmemrange allocations with multi-page alignment requirements
which can't be satisfied by the simplistic freeing of (solo) pages which
the pagedaemon performs.

As we near starvation, fragmentation is the main problem.  Our free list
could be large enough that the pagedaemon sees no reason to do more work,
but also too fragmented to satisfy a pending allocation request with
complex requirements (imagine asking for 512K of physically linear memory
which is DMA reachable).  When the requirement isn't satisfied, the
pagedaemon is told to try again, but "again" doesn't mean harder, because
it has no mechanism to try harder.  Its tracking variables do not show
the fragmentation problem.  It spins a lot.  Often this becomes a
deadlock.

Time to change strategy: overshoot creation of (both) inactive and free
pages each time through the loop.  After inspecting existing variables,
we generate a minimum of 128 inactive pages (which may be dynamically
drawn down asynchronously by accesses), and then try to convert a minimum
of 128 inactive pages into free pages (different pages get freed in
different ways, including via swapcluster, which was improved in a
previous uvm_swap.c commit to absorb more pressure and indicate when it
is full).  As we mow through the freelist, this will eventually create
some (physical address space) defragmentation and satisfy these complex
requirements.  Maybe not on the first round, but it will keep trying.
Before this change, it was not trying at all.

ok kettenis kirill beck

sys/uvm/uvm_pdaemon.c

commit jQ5yTjml7iQbjzjm
Author: deraadt
Date: 2026/04/11 01:36:23

To support swapencrypt, the swapcluster code has a memory allocation
codepath.  Since this runs inside the pagedaemon, that is unworkable.
We'd like to encrypt the pages in place for IO, but there are
architectures not ready for a high-mem page to be written to a
dma-restricted device (work in progress).  So for now we need to bounce
through a dma-reachable memory buffer.  A previous attempt had 1 extra
bounce buffer, but then slept on allocation inside the pagedaemon
context, which is also unworkable.  This version contains 32
pre-allocated swapclusters (64K each), and through a counter signals to
the pagedaemon when it should stop trying to create memory.  32 swap
clusters is comfortably more than the minimum we expect the pagedaemon
to frantically generate.  This crummy solution is good enough until the
dma reach problem is solved (soon).

ok kettenis kirill (who looked into other solutions) beck

sys/uvm/uvm_pdaemon.c
sys/uvm/uvm_swap.c
sys/uvm/uvm_swap.h

commit FCLxIGOoImIiJUEm
Author: matthieu
Date: 2026/04/10 19:24:47

update 3RDPARTY