GENERIC from Mon Feb 18 03:09:17 2013 +0200, r246926+vm1 8fc9bb3, vmcore.27 GDB: no debug ports present KDB: debugger backends: ddb KDB: current backend: ddb Copyright (c) 1992-2013 The FreeBSD Project. Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994 The Regents of the University of California. All rights reserved. FreeBSD is a registered trademark of The FreeBSD Foundation. FreeBSD 10.0-CURRENT #0 r246926+8fc9bb3: Mon Feb 18 19:56:44 CET 2013 pho@x4.osted.lan:/var/tmp/deviant2/sys/amd64/compile/PHO amd64 gcc version 4.2.1 20070831 patched [FreeBSD] WARNING: WITNESS option enabled, expect reduced performance. WARNING: DIAGNOSTIC option enabled, expect reduced performance. CPU: AMD Phenom(tm) 9150e Quad-Core Processor (1800.01-MHz K8-class CPU) Origin = "AuthenticAMD" Id = 0x100f23 Family = 0x10 Model = 0x2 Stepping = 3 Features=0x178bfbff Features2=0x802009 AMD Features=0xee500800 AMD Features2=0x7ff TSC: P-state invariant real memory = 8589934592 (8192 MB) avail memory = 8098955264 (7723 MB) : Trying to mount root from ufs:/dev/ufs/root [rw]... Setting hostuuid: 00000000-0000-0000-0000-00218515337d. Setting hostid: 0x6b64ac17. Starting ddb. Entropy harvesting: interrupts ethernet point_to_point kickstart. Starting file system checks: /dev/ufs/root: FILE SYSTEM CLEAN; SKIPPING CHECKS /dev/ufs/root: clean, 449226 free (898 frags, 56041 blocks, 0.1% fragmentation) /dev/ufs/home: FILE SYSTEM CLEAN; SKIPPING CHECKS /dev/ufs/home: clean, 114004 free (5308 frags, 13587 blocks, 0.5% fragmentation) /dev/ufs/usr: FILE SYSTEM CLEAN; SKIPPING CHECKS /dev/label/tmp: FILE SYSTEM CLEAN; SKIPPING CHECKS /dev/ufs/usr: clean, 4293220 free (174508 frags, 514839 blocks, 1.7% fragmentation) /dev/label/tmp: clean, 40522474 free (8586 frags, 5064236 blocks, 0.0% fragmentation) /dev/ufs/var: FILE SYSTEM CLEAN; SKIPPING CHECKS /dev/ufs/var: clean, 12249359 free (45399 frags, 1525495 blocks, 0.2% fragmentation) Mounting local file systems:. Setting hostname: x4.osted.lan. re0: link state changed to DOWN Starting Network: lo0 re0. lo0: flags=8049 metric 0 mtu 16384 options=600003 inet 127.0.0.1 netmask 0xff000000 inet6 ::1 prefixlen 128 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2 nd6 options=21 re0: flags=8843 metric 0 mtu 1500 options=8209b ether 00:21:85:15:33:7d inet 192.168.1.101 netmask 0xffffff00 broadcast 192.168.1.255 inet6 fe80::221:85ff:fe15:337d%re0 prefixlen 64 tentative scopeid 0x1 nd6 options=29 media: Ethernet autoselect (none) status: no carrier Starting devd. add net default: gateway 192.168.1.1 add net ::ffff:0.0.0.0: gateway ::1 add net ::0.0.0.0: gateway ::1 add net fe80::: gateway ::1 add net ff02::: gateway ::1 ELF ldconfig path: /lib /usr/lib /usr/lib/compat /usr/local/lib /usr/local/lib/compat/pkg /usr/local/kde4/lib /usr/local/lib/compat/pkg /usr/local/lib/qt4 32-bit compatibility ldconfig path: /usr/lib32 Creating and/or trimming log files. Starting syslogd. savecore: unable to read from bounds, using 0 savecore: couldn't find media and/or sector size of /var/crash: Inappropriate ioctl for device Feb 18 20:22:07 x4 savecore: couldn't find media and/or sector size of /var/crash: Inappropriate ioctl for device savecore: unable to read from bounds, using 0 No core dumps found. Additional ABI support: linux. Starting rpcbind. 
NFS access cache time=60 lock order reversal: 1st 0xffffff81e6e6d238 bufwait (bufwait) @ kern/vfs_bio.c:3010 2nd 0xfffffe000b532000 dirhash (dirhash) @ ufs/ufs/ufs_dirhash.c:284 KDB: stack backtrace: db_trace_self_wrapper() at db_trace_self_wrapper+0x2a/frame 0xffffff82477ba400 kdb_backtrace() at kdb_backtrace+0x37/frame 0xffffff82477ba4c0 _witness_debugger() at _witness_debugger+0x2c/frame 0xffffff82477ba4e0 witness_checkorder() at witness_checkorder+0x82d/frame 0xffffff82477ba590 _sx_xlock() at _sx_xlock+0x74/frame 0xffffff82477ba5c0 ufsdirhash_acquire() at ufsdirhash_acquire+0x44/frame 0xffffff82477ba5e0 ufsdirhash_add() at ufsdirhash_add+0x19/frame 0xffffff82477ba610 ufs_direnter() at ufs_direnter+0x6c1/frame 0xffffff82477ba6e0 ufs_mkdir() at ufs_mkdir+0x50e/frame 0xffffff82477ba8d0 VOP_MKDIR_APV() at VOP_MKDIR_APV+0xaa/frame 0xffffff82477ba8f0 kern_mkdirat() at kern_mkdirat+0x212/frame 0xffffff82477baad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff82477babf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff82477babf0 --- syscall (136, FreeBSD ELF64, sys_mkdir), rip = 0x80092532a, rsp = 0x7fffffffd788, rbp = 0x801006050 --- Clearing /tmp (X related). Starting mountd. Starting nfsd. Recovering vi editor sessions:. Updating motd:. Starting ntpd. Configuring syscons: keymap blanktime. Starting sshd. Starting cron. Local package initialization: watchdogd. Starting default moused. Starting inetd. Mon Feb 18 20:22:11 CET 2013 FreeBSD/amd64 (x4.osted.lan) (console) login: Feb 18 20:23:47 x4 su: pho to root on /dev/pts/0 lock order reversal: 1st 0xfffffe0119d32c98 ufs (ufs) @ kern/vfs_subr.c:2176 2nd 0xffffff81e75ec038 bufwait (bufwait) @ ufs/ffs/ffs_vnops.c:261 3rd 0xfffffe01973f0068 ufs (ufs) @ kern/vfs_subr.c:2176 KDB: stack backtrace: db_trace_self_wrapper() at db_trace_self_wrapper+0x2a/frame 0xffffff8247962dd0 kdb_backtrace() at kdb_backtrace+0x37/frame 0xffffff8247962e90 _witness_debugger() at _witness_debugger+0x2c/frame 0xffffff8247962eb0 witness_checkorder() at witness_checkorder+0x82d/frame 0xffffff8247962f60 __lockmgr_args() at __lockmgr_args+0x1125/frame 0xffffff8247963040 ffs_lock() at ffs_lock+0x9b/frame 0xffffff8247963090 VOP_LOCK1_APV() at VOP_LOCK1_APV+0x88/frame 0xffffff82479630b0 _vn_lock() at _vn_lock+0x8e/frame 0xffffff8247963130 vget() at vget+0x63/frame 0xffffff8247963180 vfs_hash_get() at vfs_hash_get+0xd5/frame 0xffffff82479631d0 ffs_vgetf() at ffs_vgetf+0x48/frame 0xffffff8247963260 softdep_sync_buf() at softdep_sync_buf+0x397/frame 0xffffff8247963340 ffs_syncvnode() at ffs_syncvnode+0x311/frame 0xffffff82479633c0 ffs_truncate() at ffs_truncate+0x10ef/frame 0xffffff8247963610 ufs_direnter() at ufs_direnter+0x538/frame 0xffffff82479636e0 ufs_mkdir() at ufs_mkdir+0x50e/frame 0xffffff82479638d0 VOP_MKDIR_APV() at VOP_MKDIR_APV+0xaa/frame 0xffffff82479638f0 kern_mkdirat() at kern_mkdirat+0x212/frame 0xffffff8247963ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247963bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247963bf0 --- syscall (136, FreeBSD ELF64, sys_mkdir), rip = 0x80092532a, rsp = 0x7fffffffd488, rbp = 0x7fffffffd931 --- 20130218 22:37:24 all: syscall5.sh lock order reversal: 1st 0xfffffe000b658548 ufs (ufs) @ kern/vfs_mount.c:1236 2nd 0xfffffe002aefd068 devfs (devfs) @ ufs/ffs/ffs_vfsops.c:1391 KDB: stack backtrace: db_trace_self_wrapper() at db_trace_self_wrapper+0x2a/frame 0xffffff8247c24510 kdb_backtrace() at kdb_backtrace+0x37/frame 0xffffff8247c245d0 _witness_debugger() at _witness_debugger+0x2c/frame 0xffffff8247c245f0 
witness_checkorder() at witness_checkorder+0x82d/frame 0xffffff8247c246a0 __lockmgr_args() at __lockmgr_args+0x1125/frame 0xffffff8247c24780 vop_stdlock() at vop_stdlock+0x39/frame 0xffffff8247c247a0 VOP_LOCK1_APV() at VOP_LOCK1_APV+0x88/frame 0xffffff8247c247c0 _vn_lock() at _vn_lock+0x8e/frame 0xffffff8247c24840 ffs_flushfiles() at ffs_flushfiles+0x109/frame 0xffffff8247c248a0 softdep_flushfiles() at softdep_flushfiles+0x64/frame 0xffffff8247c24900 ffs_unmount() at ffs_unmount+0x1d1/frame 0xffffff8247c24970 dounmount() at dounmount+0x2c9/frame 0xffffff8247c249e0 sys_unmount() at sys_unmount+0x38e/frame 0xffffff8247c24ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247c24bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247c24bf0 --- syscall (22, FreeBSD ELF64, sys_unmount), rip = 0x8008841fa, rsp = 0x7fffffffceb8, rbp = 0x801006ce8 --- 20130218 23:07:54 all: kinfo3.sh 20130218 23:23:06 all: isofs.sh 20130218 23:23:12 all: suj12.sh GEOM_NOP: Device md5.nop created. GEOM_NOP: Device md5.nop removed. 20130218 23:43:34 all: lockf2.sh 20130218 23:48:49 all: snap2-1.sh lock order reversal: 1st 0xffffff81e6e42338 bufwait (bufwait) @ kern/vfs_bio.c:3010 2nd 0xfffffe019ab0b330 snaplk (snaplk) @ ufs/ffs/ffs_snapshot.c:2298 KDB: stack backtrace: db_trace_self_wrapper() at db_trace_self_wrapper+0x2a/frame 0xffffff8234c3a630 kdb_backtrace() at kdb_backtrace+0x37/frame 0xffffff8234c3a6f0 _witness_debugger() at _witness_debugger+0x2c/frame 0xffffff8234c3a710 witness_checkorder() at witness_checkorder+0x82d/frame 0xffffff8234c3a7c0 __lockmgr_args() at __lockmgr_args+0x1125/frame 0xffffff8234c3a8a0 ffs_copyonwrite() at ffs_copyonwrite+0x199/frame 0xffffff8234c3a940 ffs_geom_strategy() at ffs_geom_strategy+0x1ba/frame 0xffffff8234c3a970 bufwrite() at bufwrite+0x125/frame 0xffffff8234c3a9a0 ffs_sbupdate() at ffs_sbupdate+0x9d/frame 0xffffff8234c3aa00 ffs_sync() at ffs_sync+0x4b3/frame 0xffffff8234c3aac0 sync_fsync() at sync_fsync+0x136/frame 0xffffff8234c3aaf0 VOP_FSYNC_APV() at VOP_FSYNC_APV+0xa6/frame 0xffffff8234c3ab10 sched_sync() at sched_sync+0x335/frame 0xffffff8234c3aba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234c3abf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234c3abf0 --- trap 0, rip = 0, rsp = 0xffffff8234c3acb0, rbp = 0 --- lock order reversal: 1st 0xfffffe019ab0b330 snaplk (snaplk) @ ufs/ufs/ufs_vnops.c:968 2nd 0xfffffe0142d512d8 ufs (ufs) @ ufs/ffs/ffs_snapshot.c:1627 KDB: stack backtrace: db_trace_self_wrapper() at db_trace_self_wrapper+0x2a/frame 0xffffff8247c562b0 kdb_backtrace() at kdb_backtrace+0x37/frame 0xffffff8247c56370 _witness_debugger() at _witness_debugger+0x2c/frame 0xffffff8247c56390 witness_checkorder() at witness_checkorder+0x82d/frame 0xffffff8247c56440 __lockmgr_args() at __lockmgr_args+0x1125/frame 0xffffff8247c56520 ffs_snapremove() at ffs_snapremove+0xe2/frame 0xffffff8247c565a0 ffs_truncate() at ffs_truncate+0xd35/frame 0xffffff8247c567f0 ufs_inactive() at ufs_inactive+0x28c/frame 0xffffff8247c56830 VOP_INACTIVE_APV() at VOP_INACTIVE_APV+0xa6/frame 0xffffff8247c56850 vinactive() at vinactive+0xb2/frame 0xffffff8247c568b0 vputx() at vputx+0x375/frame 0xffffff8247c56910 kern_unlinkat() at kern_unlinkat+0x19c/frame 0xffffff8247c56ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247c56bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247c56bf0 --- syscall (10, FreeBSD ELF64, sys_unlink), rip = 0x8009249da, rsp = 0x7fffffffd5f8, rbp = 0 --- 20130218 23:50:13 all: sem.sh 20130218 23:50:33 all: socketpair.sh kern.maxfiles 
limit exceeded by uid 0, please see tuning(7). 20130218 23:50:40 all: mountro.sh Feb 18 23:51:19 x4 kernel: pid 48416 (rw), uid 0 inumber 12290 on /mnt: filesystem full fsync: giving up on dirty 0xfffffe0142d7a270: tag devfs, type VCHR usecount 1, writecount 0, refcount 88 mountedhere 0xfffffe002aea1600 flags (VI_ACTIVE) v_object 0xfffffe0142eeda50 ref 0 pages 646 lock type devfs: EXCL by thread 0xfffffe011927b000 (pid 48618, umount, tid 100187) #0 0xffffffff808bf90f at __lockmgr_args+0x6df #1 0xffffffff80970539 at vop_stdlock+0x39 #2 0xffffffff80d1cc58 at VOP_LOCK1_APV+0x88 #3 0xffffffff8099032e at _vn_lock+0x8e #4 0xffffffff80b2c231 at softdep_flushworklist+0x61 #5 0xffffffff80b3018e at ffs_sync+0x3be #6 0xffffffff80978a0c at dounmount+0x2ac #7 0xffffffff8097911e at sys_unmount+0x38e #8 0xffffffff80c7f2f3 at amd64_syscall+0x2d3 #9 0xffffffff80c69467 at Xfast_syscall+0xf7 dev md5a fsync: giving up on dirty 0xfffffe0142d7a270: tag devfs, type VCHR usecount 1, writecount 0, refcount 88 mountedhere 0xfffffe002aea1600 flags (VI_ACTIVE) v_object 0xfffffe0142eeda50 ref 0 pages 646 lock type devfs: EXCL by thread 0xfffffe011927b000 (pid 48618, umount, tid 100187) #0 0xffffffff808bf90f at __lockmgr_args+0x6df #1 0xffffffff80970539 at vop_stdlock+0x39 #2 0xffffffff80d1cc58 at VOP_LOCK1_APV+0x88 #3 0xffffffff8099032e at _vn_lock+0x8e #4 0xffffffff80b30037 at ffs_sync+0x267 #5 0xffffffff80978a0c at dounmount+0x2ac #6 0xffffffff8097911e at sys_unmount+0x38e #7 0xffffffff80c7f2f3 at amd64_syscall+0x2d3 #8 0xffffffff80c69467 at Xfast_syscall+0xf7 dev md5a 20130218 23:52:36 all: ffs_blkfree.sh 20130218 23:53:04 all: fpclone.sh 20130218 23:55:06 all: suj3.sh cryptosoft0: on motherboard GEOM_ELI: Device md5.eli created. GEOM_ELI: Encryption: AES-XTS 128 GEOM_ELI: Crypto: software GEOM_ELI: md5 has been killed. GEOM_ELI: Device md5.eli destroyed. 
20130219 00:01:21 all: linger3.sh 20130219 00:01:33 all: rename2.sh 20130219 00:35:04 all: msdos.sh 20130219 00:45:12 all: linger4.sh panic: scratch bp !B_KVAALLOC 0xffffff81e7d323a0 cpuid = 3 KDB: enter: panic [ thread pid 79066 tid 101117 ] Stopped at kdb_enter+0x3b: movq $0,0xa96bb2(%rip) db> run pho db:0:pho> bt Tracing pid 79066 tid 101117 td 0xfffffe0101923480 kdb_enter() at kdb_enter+0x3b/frame 0xffffff8247c8d1b0 vpanic() at vpanic+0xe1/frame 0xffffff8247c8d1f0 kassert_panic() at kassert_panic+0xd3/frame 0xffffff8247c8d2e0 getblk() at getblk+0x89c/frame 0xffffff8247c8d370 breadn_flags() at breadn_flags+0x40/frame 0xffffff8247c8d3c0 ffs_balloc_ufs2() at ffs_balloc_ufs2+0xc76/frame 0xffffff8247c8d590 ufs_direnter() at ufs_direnter+0x318/frame 0xffffff8247c8d660 ufs_makeinode() at ufs_makeinode+0x359/frame 0xffffff8247c8d820 VOP_CREATE_APV() at VOP_CREATE_APV+0xa7/frame 0xffffff8247c8d840 vn_open_cred() at vn_open_cred+0x2be/frame 0xffffff8247c8d990 kern_openat() at kern_openat+0x20e/frame 0xffffff8247c8dad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247c8dbf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247c8dbf0 --- syscall (5, FreeBSD ELF64, sys_open), rip = 0x80092cb9a, rsp = 0x7fffffffd558, rbp = 0x7fffffffd5e0 --- db:0:bt> show allpcpu Current CPU: 3 cpuid = 0 dynamic pcpu = 0x5d7480 curthread = 0xfffffe01aaa76900: pid 79067 "linger4" curpcb = 0xffffff8247c15cc0 fpcurthread = none idlethread = 0xfffffe0005215480: tid 100003 "idle: cpu0" curpmap = 0xfffffe000bc10750 tssp = 0xffffffff81568200 commontssp = 0xffffffff81568200 rsp0 = 0xffffff8247c15cc0 gs32p = 0xffffffff81566338 ldt = 0xffffffff81566378 tss = 0xffffffff81566368 spin locks held: cpuid = 1 dynamic pcpu = 0xffffff807ef23480 curthread = 0xfffffe011927b480: pid 79068 "linger4" curpcb = 0xffffff8247986cc0 fpcurthread = none idlethread = 0xfffffe0005215000: tid 100004 "idle: cpu1" curpmap = 0xfffffe000bbcfa60 tssp = 0xffffffff81568268 commontssp = 0xffffffff81568268 rsp0 = 0xffffff8247986cc0 gs32p = 0xffffffff815663a0 ldt = 0xffffffff815663e0 tss = 0xffffffff815663d0 spin locks held: cpuid = 2 dynamic pcpu = 0xffffff807ef2a480 curthread = 0xfffffe0008f44480: pid 18 "softdepflush" curpcb = 0xffffff8234c44cc0 fpcurthread = none idlethread = 0xfffffe0005221900: tid 100005 "idle: cpu2" curpmap = 0xffffffff813586b0 tssp = 0xffffffff815682d0 commontssp = 0xffffffff815682d0 rsp0 = 0xffffff8234c44cc0 gs32p = 0xffffffff81566408 ldt = 0xffffffff81566448 tss = 0xffffffff81566438 spin locks held: cpuid = 3 dynamic pcpu = 0xffffff807ef31480 curthread = 0xfffffe0101923480: pid 79066 "linger4" curpcb = 0xffffff8247c8dcc0 fpcurthread = none idlethread = 0xfffffe0005221480: tid 100006 "idle: cpu3" curpmap = 0xfffffe00d328d440 tssp = 0xffffffff81568338 commontssp = 0xffffffff81568338 rsp0 = 0xffffff8247c8dcc0 gs32p = 0xffffffff81566470 ldt = 0xffffffff815664b0 tss = 0xffffffff815664a0 spin locks held: db:0:allpcpu> show alllocks Process 79068 (linger4) thread 0xfffffe011927b480 (100170) exclusive lockmgr ufs (ufs) r = 0 (0xfffffe00d3c23548) locked @ kern/vfs_subr.c:896 exclusive lockmgr ufs (ufs) r = 0 (0xfffffe0101d0d2d8) locked @ kern/vfs_lookup.c:516 Process 79067 (linger4) thread 0xfffffe01aaa76900 (100301) exclusive sleep mutex vnode interlock (vnode interlock) r = 0 (0xfffffe01aa5f4880) locked @ kern/vfs_subr.c:2254 exclusive lockmgr ufs (ufs) r = 0 (0xfffffe01aa5f47b8) locked @ kern/vfs_vnops.c:358 Process 79066 (linger4) thread 0xfffffe0101923480 (101117) exclusive lockmgr bufwait (bufwait) r = 0 
(0xffffff81e7d32438) locked @ kern/vfs_bio.c:2264 exclusive lockmgr bufwait (bufwait) r = 0 (0xffffff81e6e6bd38) locked @ kern/vfs_bio.c:3010 exclusive lockmgr ufs (ufs) r = 0 (0xfffffe00d3e287b8) locked @ ufs/ffs/ffs_vfsops.c:1690 exclusive lockmgr ufs (ufs) r = 0 (0xfffffe0101f6c7b8) locked @ kern/vfs_lookup.c:516 Process 79065 (linger4) thread 0xfffffe0101b4a480 (101343) exclusive lockmgr ufs (ufs) r = 0 (0xfffffe007063ec98) locked @ kern/vfs_lookup.c:516 Process 18 (softdepflush) thread 0xfffffe0008f44480 (100075) exclusive sleep mutex Softdep Lock (Softdep Lock) r = 0 (0xffffffff8155b180) locked @ ufs/ffs/ffs_softdep.c:9601 db:0:alllocks> show lockedvnods Locked vnodes 0xfffffe007063ec30: tag ufs, type VDIR usecount 2, writecount 0, refcount 15 mountedhere 0 flags (VI_ACTIVE) v_object 0xfffffe01e27d2780 ref 0 pages 87 lock type ufs: EXCL by thread 0xfffffe0101b4a480 (pid 79065, linger4, tid 101343) #0 0xffffffff808bff64 at __lockmgr_args+0xd34 #1 0xffffffff80b34a0b at ffs_lock+0x9b #2 0xffffffff80d1cc58 at VOP_LOCK1_APV+0x88 #3 0xffffffff8099032e at _vn_lock+0x8e #4 0xffffffff80975407 at lookup+0xc7 #5 0xffffffff809763b0 at namei+0x400 #6 0xffffffff80991cfb at vn_open_cred+0xbb #7 0xffffffff8098c91e at kern_openat+0x20e #8 0xffffffff80c7f2f3 at amd64_syscall+0x2d3 #9 0xffffffff80c69467 at Xfast_syscall+0xf7 ino 100278, on dev md5a 0xfffffe0101f6c750: tag ufs, type VDIR usecount 2, writecount 0, refcount 16 mountedhere 0 flags (VI_ACTIVE) v_object 0xfffffe01e1fff5a0 ref 0 pages 96 lock type ufs: EXCL by thread 0xfffffe0101923480 (pid 79066, linger4, tid 101117) #0 0xffffffff808bff64 at __lockmgr_args+0xd34 #1 0xffffffff80b34a0b at ffs_lock+0x9b #2 0xffffffff80d1cc58 at VOP_LOCK1_APV+0x88 #3 0xffffffff8099032e at _vn_lock+0x8e #4 0xffffffff80975407 at lookup+0xc7 #5 0xffffffff809763b0 at namei+0x400 #6 0xffffffff80991cfb at vn_open_cred+0xbb #7 0xffffffff8098c91e at kern_openat+0x20e #8 0xffffffff80c7f2f3 at amd64_syscall+0x2d3 #9 0xffffffff80c69467 at Xfast_syscall+0xf7 ino 100281, on dev md5a 0xfffffe0101d0d270: tag ufs, type VDIR usecount 2, writecount 0, refcount 17 mountedhere 0 flags (VI_ACTIVE) v_object 0xfffffe01e2ed1870 ref 0 pages 104 lock type ufs: EXCL by thread 0xfffffe011927b480 (pid 79068, linger4, tid 100170) #0 0xffffffff808bff64 at __lockmgr_args+0xd34 #1 0xffffffff80b34a0b at ffs_lock+0x9b #2 0xffffffff80d1cc58 at VOP_LOCK1_APV+0x88 #3 0xffffffff8099032e at _vn_lock+0x8e #4 0xffffffff80975407 at lookup+0xc7 #5 0xffffffff809763b0 at namei+0x400 #6 0xffffffff80991cfb at vn_open_cred+0xbb #7 0xffffffff8098c91e at kern_openat+0x20e #8 0xffffffff80c7f2f3 at amd64_syscall+0x2d3 #9 0xffffffff80c69467 at Xfast_syscall+0xf7 ino 100296, on dev md5a 0xfffffe00d3c234e0: tag ufs, type VREG usecount 0, writecount 0, refcount 1 mountedhere 0 flags (VI_DOOMED) lock type ufs: EXCL by thread 0xfffffe011927b480 (pid 79068, linger4, tid 100170) #0 0xffffffff808bf90f at __lockmgr_args+0x6df #1 0xffffffff80b34a0b at ffs_lock+0x9b #2 0xffffffff80d1cc58 at VOP_LOCK1_APV+0x88 #3 0xffffffff80984f98 at vnlru_free+0x1f8 #4 0xffffffff809854af at getnewvnode+0x26f #5 0xffffffff80b2ea8f at ffs_vgetf+0xdf #6 0xffffffff80b06281 at ffs_valloc+0x461 #7 0xffffffff80b437bf at ufs_makeinode+0xaf #8 0xffffffff80d1ddf7 at VOP_CREATE_APV+0xa7 #9 0xffffffff80991efe at vn_open_cred+0x2be #10 0xffffffff8098c91e at kern_openat+0x20e #11 0xffffffff80c7f2f3 at amd64_syscall+0x2d3 #12 0xffffffff80c69467 at Xfast_syscall+0xf7 ino 164002, on dev md5a 0xfffffe00d3e28750: tag ufs, type VREG usecount 1, writecount 0, 
refcount 1 mountedhere 0 flags (VI_ACTIVE) lock type ufs: EXCL by thread 0xfffffe0101923480 (pid 79066, linger4, tid 101117) #0 0xffffffff808bf90f at __lockmgr_args+0x6df #1 0xffffffff80b2eb13 at ffs_vgetf+0x163 #2 0xffffffff80b06281 at ffs_valloc+0x461 #3 0xffffffff80b437bf at ufs_makeinode+0xaf #4 0xffffffff80d1ddf7 at VOP_CREATE_APV+0xa7 #5 0xffffffff80991efe at vn_open_cred+0x2be #6 0xffffffff8098c91e at kern_openat+0x20e #7 0xffffffff80c7f2f3 at amd64_syscall+0x2d3 #8 0xffffffff80c69467 at Xfast_syscall+0xf7 ino 7284, on dev md5a 0xfffffe01aa5f4750: tag ufs, type VREG usecount 0, writecount 0, refcount 1 mountedhere 0 flags (VI_ACTIVE|VI_OWEINACT) v_object 0xfffffe01b3ab6000 ref 0 pages 0 lock type ufs: EXCL by thread 0xfffffe01aaa76900 (pid 79067, linger4, tid 100301) #0 0xffffffff808bf90f at __lockmgr_args+0x6df #1 0xffffffff80b34a0b at ffs_lock+0x9b #2 0xffffffff80d1cc58 at VOP_LOCK1_APV+0x88 #3 0xffffffff8099032e at _vn_lock+0x8e #4 0xffffffff80990c39 at vn_close+0xf9 #5 0xffffffff80990cd3 at vn_closefile+0x43 #6 0xffffffff80899c73 at _fdrop+0x23 #7 0xffffffff8089c19c at closef+0x5c #8 0xffffffff8089c588 at closefp+0xa8 #9 0xffffffff80c7f2f3 at amd64_syscall+0x2d3 #10 0xffffffff80c69467 at Xfast_syscall+0xf7 ino 7286, on dev md5a db:0:lockedvnods> show mount 0xfffffe000b0bcb58 /dev/ufs/root on / (ufs) 0xfffffe000b0bd000 devfs on /dev (devfs) 0xfffffe000b244790 /dev/ufs/home on /home (ufs) 0xfffffe000b3373c8 /dev/label/tmp on /tmp (ufs) 0xfffffe000b337000 /dev/ufs/usr on /usr (ufs) 0xfffffe000b336b58 /dev/ufs/var on /var (ufs) 0xfffffe000b336790 procfs on /proc (procfs) 0xfffffe000bd1e790 /dev/md5a on /mnt (ufs) More info: show mount db:0:mount> ps pid ppid pgrp uid state wmesg wchan cmd 79083 1312 1309 1001 S nanslp 0xffffffff8135fc28 sleep 79068 78777 78777 1004 R+ CPU 1 linger4 79067 78777 78777 1004 R+ CPU 0 linger4 79066 78777 78777 1004 R+ CPU 3 linger4 79065 78777 78777 1004 RL+ linger4 78777 78776 78777 1004 S+ wait 0xfffffe019a76a000 linger4 78776 78737 72767 0 S+ wait 0xfffffe019a8894a8 su 78771 0 0 0 DL mdwait 0xfffffe000b28b800 [md5] 78737 72767 72767 0 S+ wait 0xfffffe01247104a8 sh 73165 0 0 0 DL crypto_r 0xffffffff81ac6340 [crypto returns] 73164 0 0 0 DL crypto_w 0xffffffff81ac6300 [crypto] 72767 1085 72767 0 S+ wait 0xfffffe01aac02950 sh 1313 1309 1309 1001 S piperd 0xfffffe000b3af5d0 awk 1312 1309 1309 1001 S wait 0xfffffe000bc7d000 sh 1311 1306 1311 1001 Ss+ select 0xfffffe000b22b240 top 1310 1308 1310 1001 Ss kqread 0xfffffe000b38b000 tail 1309 1307 1309 1001 Ss wait 0xfffffe000b130950 sh 1308 1302 1302 1001 S select 0xfffffe000b2080c0 sshd 1307 1301 1301 1001 S select 0xfffffe00081f61c0 sshd 1306 1300 1300 1001 S select 0xfffffe00081f60c0 sshd 1302 942 1302 0 Ss select 0xfffffe00081f6140 sshd 1301 942 1301 0 Ss select 0xfffffe000b222b40 sshd 1300 942 1300 0 Ss select 0xfffffe00081f6ac0 sshd 1085 1082 1085 0 S+ wait 0xfffffe000b1ca4a8 bash 1082 1081 1082 0 S+ pause 0xfffffe000ba26548 csh 1081 1077 1081 1001 S+ wait 0xfffffe000b3584a8 su 1077 1076 1077 1001 Ss+ wait 0xfffffe000b359950 bash 1076 1074 1074 1001 S select 0xfffffe0005442d40 sshd 1074 942 1074 0 Ss select 0xfffffe00081f6c40 sshd 1071 1 1071 0 Ss+ ttyin 0xfffffe00052fb8a8 getty 1070 1 1070 0 Ss+ ttyin 0xfffffe00052fbca8 getty 1069 1 1069 0 Ss+ ttyin 0xfffffe000805d0a8 getty 1068 1 1068 0 Ss+ ttyin 0xfffffe000805d4a8 getty 1067 1 1067 0 Ss+ ttyin 0xfffffe000805d8a8 getty 1066 1 1066 0 Ss+ ttyin 0xfffffe00052fa0a8 getty 1065 1 1065 0 Ss+ ttyin 0xfffffe00052fa4a8 getty 1064 1 1064 0 Ss+ ttyin 
0xfffffe00052fa8a8 getty 1063 1 1063 0 Ss+ ttyin 0xfffffe00052faca8 getty 1024 1 1024 0 Ss select 0xfffffe000b2081c0 inetd 991 1 991 0 Ss select 0xfffffe000b2ca6c0 moused 971 1 971 0 Ss nanslp 0xffffffff8135fc28 watchdogd 961 1 961 0 Ss nanslp 0xffffffff8135fc28 cron 954 1 954 25 Ss pause 0xfffffe000b1c39f0 sendmail 950 1 950 0 Ss select 0xfffffe00081f6d40 sendmail 942 1 942 0 Ss select 0xfffffe0005442a40 sshd 851 1 851 0 Ss select 0xfffffe000b2082c0 ntpd 756 755 755 0 S (threaded) nfsd 100120 S rpcsvc 0xfffffe0008f0dd20 nfsd: service 100119 S rpcsvc 0xfffffe0008f0dca0 nfsd: service 100118 S rpcsvc 0xfffffe0008f0dc20 nfsd: service 100114 S rpcsvc 0xfffffe000b0b5220 nfsd: master 755 1 755 0 Ss select 0xfffffe00054429c0 nfsd 746 1 746 0 Ss select 0xfffffe0005442840 mountd 641 1 641 0 Ss select 0xfffffe000b2caa40 rpcbind 615 1 615 0 Ss select 0xfffffe000b22b2c0 syslogd 434 1 434 0 Ss select 0xfffffe000b2cae40 devd 18 0 0 0 RL CPU 2 [softdepflush] 17 0 0 0 DL vlruwt 0xfffffe0008f3d950 [vnlru] 16 0 0 0 DL syncer 0xffffffff8154f860 [syncer] 9 0 0 0 DL psleep 0xffffffff8154f280 [bufdaemon] 8 0 0 0 DL pgzero 0xffffffff815651bc [pagezero] 7 0 0 0 DL psleep 0xffffffff81564370 [vmdaemon] 6 0 0 0 DL psleep 0xffffffff8156434c [pagedaemon] 5 0 0 0 DL ccb_scan 0xffffffff8131da60 [xpt_thrd] 4 0 0 0 DL waiting_ 0xffffffff81555540 [sctp_iterator] 3 0 0 0 DL ctl_work 0xffffff80008b5000 [ctl_thrd] 2 0 0 0 DL - 0xfffffe0008032248 [fdc0] 15 0 0 0 DL (threaded) [usb] 100058 D - 0xffffff80008b0e18 [usbus5] 100057 D - 0xffffff80008b0dc0 [usbus5] 100056 D - 0xffffff80008b0d68 [usbus5] 100055 D - 0xffffff80008b0d10 [usbus5] 100053 D - 0xffffff80008a8460 [usbus4] 100052 D - 0xffffff80008a8408 [usbus4] 100051 D - 0xffffff80008a83b0 [usbus4] 100050 D - 0xffffff80008a8358 [usbus4] 100049 D - 0xffffff80008a5460 [usbus3] 100048 D - 0xffffff80008a5408 [usbus3] 100047 D - 0xffffff80008a53b0 [usbus3] 100046 D - 0xffffff80008a5358 [usbus3] 100045 D - 0xffffff80008a2460 [usbus2] 100044 D - 0xffffff80008a2408 [usbus2] 100043 D - 0xffffff80008a23b0 [usbus2] 100042 D - 0xffffff80008a2358 [usbus2] 100040 D - 0xffffff800089f460 [usbus1] 100039 D - 0xffffff800089f408 [usbus1] 100038 D - 0xffffff800089f3b0 [usbus1] 100037 D - 0xffffff800089f358 [usbus1] 100035 D - 0xffffff800089c460 [usbus0] 100034 D - 0xffffff800089c408 [usbus0] 100033 D - 0xffffff800089c3b0 [usbus0] 100032 D - 0xffffff800089c358 [usbus0] 14 0 0 0 DL - 0xffffffff8135e8c4 [yarrow] 13 0 0 0 DL (threaded) [geom] 100015 D - 0xffffffff81357950 [g_down] 100014 D - 0xffffffff81357948 [g_up] 100013 D - 0xffffffff81357938 [g_event] 12 0 0 0 WL (threaded) [intr] 100063 I [irq12: psm0] 100062 I [irq1: atkbd0] 100060 I [swi0: uart] 100059 I [irq14: ata0] 100054 I [irq19: ehci0] 100041 I [irq18: ohci2 ohci4] 100036 I [irq17: ohci1 ohci3] 100031 I [irq16: hdac1 ohci0] 100030 I [irq22: ahci0] 100029 I [irq257: re0] 100028 I [irq256: hdac0] 100023 I [swi2: cambio] 100022 I [swi6: task queue] 100021 I [swi6: Giant taskq] 100019 I [swi5: fast taskq] 100012 I [swi3: vm] 100011 I [swi1: netisr 0] 100010 I [swi4: clock] 100009 I [swi4: clock] 100008 I [swi4: clock] 100007 I [swi4: clock] 11 0 0 0 RL (threaded) [idle] 100006 CanRun [idle: cpu3] 100005 CanRun [idle: cpu2] 100004 CanRun [idle: cpu1] 100003 CanRun [idle: cpu0] 1 0 1 0 SLs wait 0xfffffe0005213950 [init] 10 0 0 0 DL audit_wo 0xffffffff8155a170 [audit] 0 0 0 0 DLs (threaded) [kernel] 100066 D - 0xffffffff8135e8c4 [deadlkres] 100065 D - 0xfffffe0008051780 [mca taskq] 100027 D - 0xfffffe00053c7200 [acpi_task_2] 100026 D - 
0xfffffe00053c7200 [acpi_task_1] 100025 D - 0xfffffe00053c7200 [acpi_task_0] 100024 D - 0xfffffe00053c7280 [kqueue taskq] 100020 D - 0xfffffe00053c7400 [thread taskq] 100018 D - 0xfffffe00053c7500 [ffs_trim taskq] 100016 D - 0xfffffe0002f63900 [firmware taskq] 100000 D sched 0xffffffff81357c60 [swapper] db:0:ps> allt Tracing command sleep pid 79083 tid 100863 td 0xfffffe01aaae3480 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247b52850 mi_switch() at mi_switch+0x238/frame 0xffffff8247b528a0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247b528e0 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8247b52940 sleepq_timedwait_sig() at sleepq_timedwait_sig+0x19/frame 0xffffff8247b52970 _sleep() at _sleep+0x3c3/frame 0xffffff8247b52a00 kern_nanosleep() at kern_nanosleep+0x118/frame 0xffffff8247b52a70 sys_nanosleep() at sys_nanosleep+0x6e/frame 0xffffff8247b52ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247b52bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247b52bf0 --- syscall (240, FreeBSD ELF64, sys_nanosleep), rip = 0x800913baa, rsp = 0x7fffffffdb48, rbp = 0x7fffffffdbd0 --- Tracing command linger4 pid 79068 tid 100170 td 0xfffffe011927b480 cpustop_handler() at cpustop_handler+0x2c/frame 0xffffff800023ed00 ipi_nmi_handler() at ipi_nmi_handler+0x3d/frame 0xffffff800023ed20 trap() at trap+0x325/frame 0xffffff800023ef20 nmi_calltrap() at nmi_calltrap+0x8/frame 0xffffff800023ef20 --- trap 0x13, rip = 0xffffffff808c7b9b, rsp = 0xffffff800023efe0, rbp = 0xffffff8247986290 --- __mtx_assert() at __mtx_assert+0x7b/frame 0xffffff8247986290 __mtx_unlock_flags() at __mtx_unlock_flags+0xb2/frame 0xffffff82479862d0 uma_zfree_arg() at uma_zfree_arg+0x83/frame 0xffffff8247986330 vnode_destroy_vobject() at vnode_destroy_vobject+0x8e/frame 0xffffff8247986360 ufs_prepare_reclaim() at ufs_prepare_reclaim+0x22/frame 0xffffff8247986390 ufs_reclaim() at ufs_reclaim+0x30/frame 0xffffff82479863c0 VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0xa6/frame 0xffffff82479863e0 vgonel() at vgonel+0x33d/frame 0xffffff8247986480 vnlru_free() at vnlru_free+0x2fb/frame 0xffffff82479864f0 getnewvnode() at getnewvnode+0x26f/frame 0xffffff8247986530 ffs_vgetf() at ffs_vgetf+0xdf/frame 0xffffff82479865c0 ffs_valloc() at ffs_valloc+0x461/frame 0xffffff8247986660 ufs_makeinode() at ufs_makeinode+0xaf/frame 0xffffff8247986820 VOP_CREATE_APV() at VOP_CREATE_APV+0xa7/frame 0xffffff8247986840 vn_open_cred() at vn_open_cred+0x2be/frame 0xffffff8247986990 kern_openat() at kern_openat+0x20e/frame 0xffffff8247986ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247986bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247986bf0 --- syscall (5, FreeBSD ELF64, sys_open), rip = 0x80092cb9a, rsp = 0x7fffffffd558, rbp = 0x7fffffffd5e0 --- Tracing command linger4 pid 79067 tid 100301 td 0xfffffe01aaa76900 cpustop_handler() at cpustop_handler+0x2c/frame 0xffffffff815759e0 ipi_nmi_handler() at ipi_nmi_handler+0x3d/frame 0xffffffff81575a00 trap() at trap+0x325/frame 0xffffffff81575c00 nmi_calltrap() at nmi_calltrap+0x8/frame 0xffffffff81575c00 --- trap 0x13, rip = 0xffffffff809835a1, rsp = 0xffffffff81575cc0, rbp = 0xffffff8247c158d0 --- vinactive() at vinactive+0x1/frame 0xffffff8247c158d0 vn_close() at vn_close+0xa7/frame 0xffffff8247c15940 vn_closefile() at vn_closefile+0x43/frame 0xffffff8247c159c0 _fdrop() at _fdrop+0x23/frame 0xffffff8247c159e0 closef() at closef+0x5c/frame 0xffffff8247c15a80 closefp() at closefp+0xa8/frame 0xffffff8247c15ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 
0xffffff8247c15bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247c15bf0 --- syscall (6, FreeBSD ELF64, sys_close), rip = 0x8009348ea, rsp = 0x7fffffffd558, rbp = 0x7fffffffd5e0 --- Tracing command linger4 pid 79066 tid 101117 td 0xfffffe0101923480 kdb_enter() at kdb_enter+0x3b/frame 0xffffff8247c8d1b0 vpanic() at vpanic+0xe1/frame 0xffffff8247c8d1f0 kassert_panic() at kassert_panic+0xd3/frame 0xffffff8247c8d2e0 getblk() at getblk+0x89c/frame 0xffffff8247c8d370 breadn_flags() at breadn_flags+0x40/frame 0xffffff8247c8d3c0 ffs_balloc_ufs2() at ffs_balloc_ufs2+0xc76/frame 0xffffff8247c8d590 ufs_direnter() at ufs_direnter+0x318/frame 0xffffff8247c8d660 ufs_makeinode() at ufs_makeinode+0x359/frame 0xffffff8247c8d820 VOP_CREATE_APV() at VOP_CREATE_APV+0xa7/frame 0xffffff8247c8d840 vn_open_cred() at vn_open_cred+0x2be/frame 0xffffff8247c8d990 kern_openat() at kern_openat+0x20e/frame 0xffffff8247c8dad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247c8dbf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247c8dbf0 --- syscall (5, FreeBSD ELF64, sys_open), rip = 0x80092cb9a, rsp = 0x7fffffffd558, rbp = 0x7fffffffd5e0 --- Tracing command linger4 pid 79065 tid 101343 td 0xfffffe0101b4a480 sched_switch() at sched_switch+0x1b4/frame 0xffffff82479ea130 mi_switch() at mi_switch+0x238/frame 0xffffff82479ea180 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82479ea1c0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff82479ea1f0 __lockmgr_args() at __lockmgr_args+0x6ef/frame 0xffffff82479ea2d0 getblk() at getblk+0x11f/frame 0xffffff82479ea360 breadn_flags() at breadn_flags+0x40/frame 0xffffff82479ea3b0 ffs_freefile() at ffs_freefile+0xde/frame 0xffffff82479ea470 handle_workitem_freefile() at handle_workitem_freefile+0xe8/frame 0xffffff82479ea4c0 process_worklist_item() at process_worklist_item+0x418/frame 0xffffff82479ea530 softdep_request_cleanup() at softdep_request_cleanup+0x3de/frame 0xffffff82479ea5c0 ffs_valloc() at ffs_valloc+0xcf/frame 0xffffff82479ea660 ufs_makeinode() at ufs_makeinode+0xaf/frame 0xffffff82479ea820 VOP_CREATE_APV() at VOP_CREATE_APV+0xa7/frame 0xffffff82479ea840 vn_open_cred() at vn_open_cred+0x2be/frame 0xffffff82479ea990 kern_openat() at kern_openat+0x20e/frame 0xffffff82479eaad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff82479eabf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff82479eabf0 --- syscall (5, FreeBSD ELF64, sys_open), rip = 0x80092cb9a, rsp = 0x7fffffffd558, rbp = 0x7fffffffd5e0 --- Tracing command linger4 pid 78777 tid 101352 td 0xfffffe01aaa76000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247a58670 mi_switch() at mi_switch+0x238/frame 0xffffff8247a586c0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247a58700 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8247a58760 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff8247a58780 _sleep() at _sleep+0x37d/frame 0xffffff8247a58810 kern_wait6() at kern_wait6+0x5f1/frame 0xffffff8247a588b0 kern_wait() at kern_wait+0x9c/frame 0xffffff8247a58a10 sys_wait4() at sys_wait4+0x35/frame 0xffffff8247a58ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247a58bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247a58bf0 --- syscall (7, FreeBSD ELF64, sys_wait4), rip = 0x80089a2aa, rsp = 0x7fffffffd698, rbp = 0 --- Tracing command su pid 78776 tid 101354 td 0xfffffe0101869000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247a7b670 mi_switch() at mi_switch+0x238/frame 0xffffff8247a7b6c0 sleepq_switch() at sleepq_switch+0xfe/frame 
0xffffff8247a7b700 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8247a7b760 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff8247a7b780 _sleep() at _sleep+0x37d/frame 0xffffff8247a7b810 kern_wait6() at kern_wait6+0x5f1/frame 0xffffff8247a7b8b0 kern_wait() at kern_wait+0x9c/frame 0xffffff8247a7ba10 sys_wait4() at sys_wait4+0x35/frame 0xffffff8247a7bad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247a7bbf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247a7bbf0 --- syscall (7, FreeBSD ELF64, sys_wait4), rip = 0x800ed42aa, rsp = 0x7fffffffd158, rbp = 0x133b9 --- Tracing command md5 pid 78771 tid 100174 td 0xfffffe000bd96480 sched_switch() at sched_switch+0x1b4/frame 0xffffff824799aa10 mi_switch() at mi_switch+0x238/frame 0xffffff824799aa60 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff824799aaa0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff824799aad0 _sleep() at _sleep+0x3e9/frame 0xffffff824799ab60 md_kthread() at md_kthread+0x17b/frame 0xffffff824799aba0 fork_exit() at fork_exit+0x139/frame 0xffffff824799abf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff824799abf0 --- trap 0, rip = 0, rsp = 0xffffff824799acb0, rbp = 0 --- Tracing command sh pid 78737 tid 100759 td 0xfffffe010191c900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247c6a670 mi_switch() at mi_switch+0x238/frame 0xffffff8247c6a6c0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247c6a700 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8247c6a760 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff8247c6a780 _sleep() at _sleep+0x37d/frame 0xffffff8247c6a810 kern_wait6() at kern_wait6+0x5f1/frame 0xffffff8247c6a8b0 kern_wait() at kern_wait+0x9c/frame 0xffffff8247c6aa10 sys_wait4() at sys_wait4+0x35/frame 0xffffff8247c6aad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247c6abf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247c6abf0 --- syscall (7, FreeBSD ELF64, sys_wait4), rip = 0x800d302aa, rsp = 0x7fffffffd538, rbp = 0x1 --- Tracing command crypto returns pid 73165 tid 100280 td 0xfffffe015b9c4000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247baca10 mi_switch() at mi_switch+0x238/frame 0xffffff8247baca60 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247bacaa0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8247bacad0 _sleep() at _sleep+0x3e9/frame 0xffffff8247bacb60 crypto_ret_proc() at crypto_ret_proc+0x19a/frame 0xffffff8247bacba0 fork_exit() at fork_exit+0x139/frame 0xffffff8247bacbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8247bacbf0 --- trap 0, rip = 0, rsp = 0xffffff8247baccb0, rbp = 0 --- Tracing command crypto pid 73164 tid 100198 td 0xfffffe01246c3000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247a12a20 mi_switch() at mi_switch+0x238/frame 0xffffff8247a12a70 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247a12ab0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8247a12ae0 _sleep() at _sleep+0x3e9/frame 0xffffff8247a12b70 crypto_proc() at crypto_proc+0x1cd/frame 0xffffff8247a12ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8247a12bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8247a12bf0 --- trap 0, rip = 0, rsp = 0xffffff8247a12cb0, rbp = 0 --- Tracing command sh pid 72767 tid 100292 td 0xfffffe015bbe9000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247be8670 mi_switch() at mi_switch+0x238/frame 0xffffff8247be86c0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247be8700 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 
0xffffff8247be8760 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff8247be8780 _sleep() at _sleep+0x37d/frame 0xffffff8247be8810 kern_wait6() at kern_wait6+0x5f1/frame 0xffffff8247be88b0 kern_wait() at kern_wait+0x9c/frame 0xffffff8247be8a10 sys_wait4() at sys_wait4+0x35/frame 0xffffff8247be8ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247be8bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247be8bf0 --- syscall (7, FreeBSD ELF64, sys_wait4), rip = 0x800d302aa, rsp = 0x7fffffffd3f8, rbp = 0x1 --- Tracing command awk pid 1313 tid 100169 td 0xfffffe011927b900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247981800 mi_switch() at mi_switch+0x238/frame 0xffffff8247981850 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247981890 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82479818f0 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff8247981910 _sleep() at _sleep+0x37d/frame 0xffffff82479819a0 pipe_read() at pipe_read+0x432/frame 0xffffff82479819f0 dofileread() at dofileread+0xa1/frame 0xffffff8247981a40 kern_readv() at kern_readv+0x6c/frame 0xffffff8247981a80 sys_read() at sys_read+0x64/frame 0xffffff8247981ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247981bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247981bf0 --- syscall (3, FreeBSD ELF64, sys_read), rip = 0x800b7792a, rsp = 0x7fffffffd848, rbp = 0x800db5d60 --- Tracing command sh pid 1312 tid 100168 td 0xfffffe011927f000 sched_switch() at sched_switch+0x1b4/frame 0xffffff824797c670 mi_switch() at mi_switch+0x238/frame 0xffffff824797c6c0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff824797c700 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff824797c760 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff824797c780 _sleep() at _sleep+0x37d/frame 0xffffff824797c810 kern_wait6() at kern_wait6+0x5f1/frame 0xffffff824797c8b0 kern_wait() at kern_wait+0x9c/frame 0xffffff824797ca10 sys_wait4() at sys_wait4+0x35/frame 0xffffff824797cad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff824797cbf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff824797cbf0 --- syscall (7, FreeBSD ELF64, sys_wait4), rip = 0x800d302aa, rsp = 0x7fffffffd678, rbp = 0x1 --- Tracing command top pid 1311 tid 100148 td 0xfffffe000b37c000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247918680 mi_switch() at mi_switch+0x238/frame 0xffffff82479186d0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247918710 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8247918770 sleepq_timedwait_sig() at sleepq_timedwait_sig+0x19/frame 0xffffff82479187a0 _cv_timedwait_sig() at _cv_timedwait_sig+0x18f/frame 0xffffff8247918800 seltdwait() at seltdwait+0x57/frame 0xffffff8247918830 kern_select() at kern_select+0x79f/frame 0xffffff8247918a80 sys_select() at sys_select+0x5d/frame 0xffffff8247918ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247918bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247918bf0 --- syscall (93, FreeBSD ELF64, sys_select), rip = 0x800fb98aa, rsp = 0x7fffffffd988, rbp = 0xe --- Tracing command tail pid 1310 tid 100137 td 0xfffffe000bb87000 sched_switch() at sched_switch+0x1b4/frame 0xffffff82478e1680 mi_switch() at mi_switch+0x238/frame 0xffffff82478e16d0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82478e1710 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82478e1770 sleepq_timedwait_sig() at sleepq_timedwait_sig+0x19/frame 0xffffff82478e17a0 _sleep() at _sleep+0x3c3/frame 
0xffffff82478e1830 kern_kevent() at kern_kevent+0x33a/frame 0xffffff82478e1a10 sys_kevent() at sys_kevent+0x90/frame 0xffffff82478e1ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff82478e1bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff82478e1bf0 --- syscall (363, FreeBSD ELF64, sys_kevent), rip = 0x800918b2a, rsp = 0x7fffffffd9e8, rbp = 0x8010060a8 --- Tracing command sh pid 1309 tid 100077 td 0xfffffe000b136900 sched_switch() at sched_switch+0x1b4/frame 0xffffff82477b5670 mi_switch() at mi_switch+0x238/frame 0xffffff82477b56c0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82477b5700 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82477b5760 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82477b5780 _sleep() at _sleep+0x37d/frame 0xffffff82477b5810 kern_wait6() at kern_wait6+0x5f1/frame 0xffffff82477b58b0 kern_wait() at kern_wait+0x9c/frame 0xffffff82477b5a10 sys_wait4() at sys_wait4+0x35/frame 0xffffff82477b5ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff82477b5bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff82477b5bf0 --- syscall (7, FreeBSD ELF64, sys_wait4), rip = 0x800d302aa, rsp = 0x7fffffffd938, rbp = 0x1 --- Tracing command sshd pid 1308 tid 100097 td 0xfffffe000b134480 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247819690 mi_switch() at mi_switch+0x238/frame 0xffffff82478196e0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247819720 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8247819780 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82478197a0 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff8247819800 seltdwait() at seltdwait+0xad/frame 0xffffff8247819830 kern_select() at kern_select+0x79f/frame 0xffffff8247819a80 sys_select() at sys_select+0x5d/frame 0xffffff8247819ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247819bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247819bf0 --- syscall (93, FreeBSD ELF64, sys_select), rip = 0x80255f8aa, rsp = 0x7fffffffcc58, rbp = 0x7fffffffcce0 --- Tracing command sshd pid 1307 tid 100167 td 0xfffffe011927f480 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247977690 mi_switch() at mi_switch+0x238/frame 0xffffff82479776e0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247977720 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8247977780 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82479777a0 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff8247977800 seltdwait() at seltdwait+0xad/frame 0xffffff8247977830 kern_select() at kern_select+0x79f/frame 0xffffff8247977a80 sys_select() at sys_select+0x5d/frame 0xffffff8247977ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247977bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247977bf0 --- syscall (93, FreeBSD ELF64, sys_select), rip = 0x80255f8aa, rsp = 0x7fffffffcc58, rbp = 0x7fffffffcce0 --- Tracing command sshd pid 1306 tid 100166 td 0xfffffe011927f900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247972690 mi_switch() at mi_switch+0x238/frame 0xffffff82479726e0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247972720 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8247972780 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82479727a0 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff8247972800 seltdwait() at seltdwait+0xad/frame 0xffffff8247972830 kern_select() at kern_select+0x79f/frame 0xffffff8247972a80 sys_select() at sys_select+0x5d/frame 0xffffff8247972ad0 
amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247972bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247972bf0 --- syscall (93, FreeBSD ELF64, sys_select), rip = 0x80255f8aa, rsp = 0x7fffffffcc58, rbp = 0x7fffffffcce0 --- Tracing command sshd pid 1302 tid 100100 td 0xfffffe000b1d2000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247828770 mi_switch() at mi_switch+0x238/frame 0xffffff82478287c0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247828800 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8247828860 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff8247828880 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff82478288e0 seltdwait() at seltdwait+0xad/frame 0xffffff8247828910 sys_poll() at sys_poll+0x28a/frame 0xffffff8247828ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247828bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247828bf0 --- syscall (209, FreeBSD ELF64, sys_poll), rip = 0x8024f950a, rsp = 0x7fffffffccf8, rbp = 0x803c22190 --- Tracing command sshd pid 1301 tid 100131 td 0xfffffe000bb88480 sched_switch() at sched_switch+0x1b4/frame 0xffffff82478c3770 mi_switch() at mi_switch+0x238/frame 0xffffff82478c37c0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82478c3800 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82478c3860 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82478c3880 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff82478c38e0 seltdwait() at seltdwait+0xad/frame 0xffffff82478c3910 sys_poll() at sys_poll+0x28a/frame 0xffffff82478c3ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff82478c3bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff82478c3bf0 --- syscall (209, FreeBSD ELF64, sys_poll), rip = 0x8024f950a, rsp = 0x7fffffffccf8, rbp = 0x803c22190 --- Tracing command sshd pid 1300 tid 100123 td 0xfffffe000b155000 sched_switch() at sched_switch+0x1b4/frame 0xffffff824789b770 mi_switch() at mi_switch+0x238/frame 0xffffff824789b7c0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff824789b800 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff824789b860 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff824789b880 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff824789b8e0 seltdwait() at seltdwait+0xad/frame 0xffffff824789b910 sys_poll() at sys_poll+0x28a/frame 0xffffff824789bad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff824789bbf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff824789bbf0 --- syscall (209, FreeBSD ELF64, sys_poll), rip = 0x8024f950a, rsp = 0x7fffffffccf8, rbp = 0x803c22190 --- Tracing command bash pid 1085 tid 100087 td 0xfffffe000b1d3480 sched_switch() at sched_switch+0x1b4/frame 0xffffff82477e7670 mi_switch() at mi_switch+0x238/frame 0xffffff82477e76c0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82477e7700 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82477e7760 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82477e7780 _sleep() at _sleep+0x37d/frame 0xffffff82477e7810 kern_wait6() at kern_wait6+0x5f1/frame 0xffffff82477e78b0 kern_wait() at kern_wait+0x9c/frame 0xffffff82477e7a10 sys_wait4() at sys_wait4+0x35/frame 0xffffff82477e7ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff82477e7bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff82477e7bf0 --- syscall (7, FreeBSD ELF64, sys_wait4), rip = 0x8010a12aa, rsp = 0x7fffffffd678, rbp = 0x801911ea0 --- Tracing command csh pid 1082 tid 100129 td 0xfffffe0008f45480 sched_switch() at 
sched_switch+0x1b4/frame 0xffffff82478b98b0 mi_switch() at mi_switch+0x238/frame 0xffffff82478b9900 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82478b9940 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82478b99a0 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82478b99c0 _sleep() at _sleep+0x37d/frame 0xffffff82478b9a50 kern_sigsuspend() at kern_sigsuspend+0xaa/frame 0xffffff82478b9aa0 sys_sigsuspend() at sys_sigsuspend+0x34/frame 0xffffff82478b9ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff82478b9bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff82478b9bf0 --- syscall (4, FreeBSD ELF64, sys_write), rip = 0x800d4509a, rsp = 0x7fffffffcda8, rbp = 0x801838400 --- Tracing command su pid 1081 tid 100111 td 0xfffffe0008f41480 sched_switch() at sched_switch+0x1b4/frame 0xffffff824785f670 mi_switch() at mi_switch+0x238/frame 0xffffff824785f6c0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff824785f700 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff824785f760 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff824785f780 _sleep() at _sleep+0x37d/frame 0xffffff824785f810 kern_wait6() at kern_wait6+0x5f1/frame 0xffffff824785f8b0 kern_wait() at kern_wait+0x9c/frame 0xffffff824785fa10 sys_wait4() at sys_wait4+0x35/frame 0xffffff824785fad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff824785fbf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff824785fbf0 --- syscall (7, FreeBSD ELF64, sys_wait4), rip = 0x800ed42aa, rsp = 0x7fffffffd478, rbp = 0x43a --- Tracing command bash pid 1077 tid 100107 td 0xfffffe0008f42900 sched_switch() at sched_switch+0x1b4/frame 0xffffff824784b670 mi_switch() at mi_switch+0x238/frame 0xffffff824784b6c0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff824784b700 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff824784b760 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff824784b780 _sleep() at _sleep+0x37d/frame 0xffffff824784b810 kern_wait6() at kern_wait6+0x5f1/frame 0xffffff824784b8b0 kern_wait() at kern_wait+0x9c/frame 0xffffff824784ba10 sys_wait4() at sys_wait4+0x35/frame 0xffffff824784bad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff824784bbf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff824784bbf0 --- syscall (7, FreeBSD ELF64, sys_wait4), rip = 0x8010a12aa, rsp = 0x7fffffffd8c8, rbp = 0x801908d40 --- Tracing command sshd pid 1076 tid 100128 td 0xfffffe0008f45900 sched_switch() at sched_switch+0x1b4/frame 0xffffff82478b4690 mi_switch() at mi_switch+0x238/frame 0xffffff82478b46e0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82478b4720 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82478b4780 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82478b47a0 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff82478b4800 seltdwait() at seltdwait+0xad/frame 0xffffff82478b4830 kern_select() at kern_select+0x79f/frame 0xffffff82478b4a80 sys_select() at sys_select+0x5d/frame 0xffffff82478b4ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff82478b4bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff82478b4bf0 --- syscall (93, FreeBSD ELF64, sys_select), rip = 0x80255f8aa, rsp = 0x7fffffffcc58, rbp = 0x7fffffffcce0 --- Tracing command sshd pid 1074 tid 100108 td 0xfffffe0008f42480 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247850770 mi_switch() at mi_switch+0x238/frame 0xffffff82478507c0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247850800 sleepq_catch_signals() at 
sleepq_catch_signals+0x2c6/frame 0xffffff8247850860 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff8247850880 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff82478508e0 seltdwait() at seltdwait+0xad/frame 0xffffff8247850910 sys_poll() at sys_poll+0x28a/frame 0xffffff8247850ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247850bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247850bf0 --- syscall (209, FreeBSD ELF64, sys_poll), rip = 0x8024f950a, rsp = 0x7fffffffccf8, rbp = 0x803c22190 --- Tracing command getty pid 1071 tid 100105 td 0xfffffe000b22c480 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247841710 mi_switch() at mi_switch+0x238/frame 0xffffff8247841760 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82478417a0 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8247841800 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff8247841820 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff8247841880 tty_wait() at tty_wait+0x4c/frame 0xffffff82478418b0 ttydisc_read() at ttydisc_read+0x38e/frame 0xffffff8247841950 ttydev_read() at ttydev_read+0x95/frame 0xffffff8247841980 devfs_read_f() at devfs_read_f+0x90/frame 0xffffff82478419f0 dofileread() at dofileread+0xa1/frame 0xffffff8247841a40 kern_readv() at kern_readv+0x6c/frame 0xffffff8247841a80 sys_read() at sys_read+0x64/frame 0xffffff8247841ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247841bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247841bf0 --- syscall (3, FreeBSD ELF64, sys_read), rip = 0x800b4b92a, rsp = 0x7fffffffdc98, rbp = 0 --- Tracing command getty pid 1070 tid 100076 td 0xfffffe0008f44000 sched_switch() at sched_switch+0x1b4/frame 0xffffff82477b0710 mi_switch() at mi_switch+0x238/frame 0xffffff82477b0760 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82477b07a0 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82477b0800 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82477b0820 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff82477b0880 tty_wait() at tty_wait+0x4c/frame 0xffffff82477b08b0 ttydisc_read() at ttydisc_read+0x38e/frame 0xffffff82477b0950 ttydev_read() at ttydev_read+0x95/frame 0xffffff82477b0980 devfs_read_f() at devfs_read_f+0x90/frame 0xffffff82477b09f0 dofileread() at dofileread+0xa1/frame 0xffffff82477b0a40 kern_readv() at kern_readv+0x6c/frame 0xffffff82477b0a80 sys_read() at sys_read+0x64/frame 0xffffff82477b0ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff82477b0bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff82477b0bf0 --- syscall (3, FreeBSD ELF64, sys_read), rip = 0x800b4b92a, rsp = 0x7fffffffdc98, rbp = 0 --- Tracing command getty pid 1069 tid 100094 td 0xfffffe000b22e900 sched_switch() at sched_switch+0x1b4/frame 0xffffff824780a710 mi_switch() at mi_switch+0x238/frame 0xffffff824780a760 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff824780a7a0 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff824780a800 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff824780a820 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff824780a880 tty_wait() at tty_wait+0x4c/frame 0xffffff824780a8b0 ttydisc_read() at ttydisc_read+0x38e/frame 0xffffff824780a950 ttydev_read() at ttydev_read+0x95/frame 0xffffff824780a980 devfs_read_f() at devfs_read_f+0x90/frame 0xffffff824780a9f0 dofileread() at dofileread+0xa1/frame 0xffffff824780aa40 kern_readv() at kern_readv+0x6c/frame 0xffffff824780aa80 sys_read() at sys_read+0x64/frame 0xffffff824780aad0 amd64_syscall() at 
amd64_syscall+0x2d3/frame 0xffffff824780abf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff824780abf0 --- syscall (3, FreeBSD ELF64, sys_read), rip = 0x800b4b92a, rsp = 0x7fffffffdc98, rbp = 0 --- Tracing command getty pid 1068 tid 100116 td 0xfffffe000b37d480 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247878710 mi_switch() at mi_switch+0x238/frame 0xffffff8247878760 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82478787a0 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8247878800 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff8247878820 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff8247878880 tty_wait() at tty_wait+0x4c/frame 0xffffff82478788b0 ttydisc_read() at ttydisc_read+0x38e/frame 0xffffff8247878950 ttydev_read() at ttydev_read+0x95/frame 0xffffff8247878980 devfs_read_f() at devfs_read_f+0x90/frame 0xffffff82478789f0 dofileread() at dofileread+0xa1/frame 0xffffff8247878a40 kern_readv() at kern_readv+0x6c/frame 0xffffff8247878a80 sys_read() at sys_read+0x64/frame 0xffffff8247878ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247878bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247878bf0 --- syscall (3, FreeBSD ELF64, sys_read), rip = 0x800b4b92a, rsp = 0x7fffffffdc98, rbp = 0 --- Tracing command getty pid 1067 tid 100106 td 0xfffffe000b22c000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247846710 mi_switch() at mi_switch+0x238/frame 0xffffff8247846760 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82478467a0 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8247846800 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff8247846820 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff8247846880 tty_wait() at tty_wait+0x4c/frame 0xffffff82478468b0 ttydisc_read() at ttydisc_read+0x38e/frame 0xffffff8247846950 ttydev_read() at ttydev_read+0x95/frame 0xffffff8247846980 devfs_read_f() at devfs_read_f+0x90/frame 0xffffff82478469f0 dofileread() at dofileread+0xa1/frame 0xffffff8247846a40 kern_readv() at kern_readv+0x6c/frame 0xffffff8247846a80 sys_read() at sys_read+0x64/frame 0xffffff8247846ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247846bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247846bf0 --- syscall (3, FreeBSD ELF64, sys_read), rip = 0x800b4b92a, rsp = 0x7fffffffdc98, rbp = 0 --- Tracing command getty pid 1066 tid 100082 td 0xfffffe0008f43000 sched_switch() at sched_switch+0x1b4/frame 0xffffff82477ce710 mi_switch() at mi_switch+0x238/frame 0xffffff82477ce760 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82477ce7a0 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82477ce800 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82477ce820 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff82477ce880 tty_wait() at tty_wait+0x4c/frame 0xffffff82477ce8b0 ttydisc_read() at ttydisc_read+0x38e/frame 0xffffff82477ce950 ttydev_read() at ttydev_read+0x95/frame 0xffffff82477ce980 devfs_read_f() at devfs_read_f+0x90/frame 0xffffff82477ce9f0 dofileread() at dofileread+0xa1/frame 0xffffff82477cea40 kern_readv() at kern_readv+0x6c/frame 0xffffff82477cea80 sys_read() at sys_read+0x64/frame 0xffffff82477cead0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff82477cebf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff82477cebf0 --- syscall (3, FreeBSD ELF64, sys_read), rip = 0x800b4b92a, rsp = 0x7fffffffdc98, rbp = 0 --- Tracing command getty pid 1065 tid 100078 td 0xfffffe000b136480 sched_switch() at sched_switch+0x1b4/frame 
0xffffff82477ba710 mi_switch() at mi_switch+0x238/frame 0xffffff82477ba760 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82477ba7a0 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82477ba800 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82477ba820 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff82477ba880 tty_wait() at tty_wait+0x4c/frame 0xffffff82477ba8b0 ttydisc_read() at ttydisc_read+0x38e/frame 0xffffff82477ba950 ttydev_read() at ttydev_read+0x95/frame 0xffffff82477ba980 devfs_read_f() at devfs_read_f+0x90/frame 0xffffff82477ba9f0 dofileread() at dofileread+0xa1/frame 0xffffff82477baa40 kern_readv() at kern_readv+0x6c/frame 0xffffff82477baa80 sys_read() at sys_read+0x64/frame 0xffffff82477baad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff82477babf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff82477babf0 --- syscall (3, FreeBSD ELF64, sys_read), rip = 0x800b4b92a, rsp = 0x7fffffffdc98, rbp = 0 --- Tracing command getty pid 1064 tid 100085 td 0xfffffe000b1d4000 sched_switch() at sched_switch+0x1b4/frame 0xffffff82477dd710 mi_switch() at mi_switch+0x238/frame 0xffffff82477dd760 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82477dd7a0 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82477dd800 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82477dd820 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff82477dd880 tty_wait() at tty_wait+0x4c/frame 0xffffff82477dd8b0 ttydisc_read() at ttydisc_read+0x38e/frame 0xffffff82477dd950 ttydev_read() at ttydev_read+0x95/frame 0xffffff82477dd980 devfs_read_f() at devfs_read_f+0x90/frame 0xffffff82477dd9f0 dofileread() at dofileread+0xa1/frame 0xffffff82477dda40 kern_readv() at kern_readv+0x6c/frame 0xffffff82477dda80 sys_read() at sys_read+0x64/frame 0xffffff82477ddad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff82477ddbf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff82477ddbf0 --- syscall (3, FreeBSD ELF64, sys_read), rip = 0x800b4b92a, rsp = 0x7fffffffdc98, rbp = 0 --- Tracing command getty pid 1063 tid 100099 td 0xfffffe000b22e000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247823710 mi_switch() at mi_switch+0x238/frame 0xffffff8247823760 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82478237a0 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8247823800 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff8247823820 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff8247823880 tty_wait() at tty_wait+0x4c/frame 0xffffff82478238b0 ttydisc_read() at ttydisc_read+0x38e/frame 0xffffff8247823950 ttydev_read() at ttydev_read+0x95/frame 0xffffff8247823980 devfs_read_f() at devfs_read_f+0x90/frame 0xffffff82478239f0 dofileread() at dofileread+0xa1/frame 0xffffff8247823a40 kern_readv() at kern_readv+0x6c/frame 0xffffff8247823a80 sys_read() at sys_read+0x64/frame 0xffffff8247823ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247823bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247823bf0 --- syscall (3, FreeBSD ELF64, sys_read), rip = 0x800b4b92a, rsp = 0x7fffffffdc98, rbp = 0 --- Tracing command inetd pid 1024 tid 100090 td 0xfffffe000b1d2900 sched_switch() at sched_switch+0x1b4/frame 0xffffff82477f6690 mi_switch() at mi_switch+0x238/frame 0xffffff82477f66e0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82477f6720 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82477f6780 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82477f67a0 _cv_wait_sig() at _cv_wait_sig+0x181/frame 
0xffffff82477f6800 seltdwait() at seltdwait+0xad/frame 0xffffff82477f6830 kern_select() at kern_select+0x79f/frame 0xffffff82477f6a80 sys_select() at sys_select+0x5d/frame 0xffffff82477f6ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff82477f6bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff82477f6bf0 --- syscall (93, FreeBSD ELF64, sys_select), rip = 0x800f608aa, rsp = 0x7fffffffcd68, rbp = 0x1 --- Tracing command moused pid 991 tid 100079 td 0xfffffe000b136000 sched_switch() at sched_switch+0x1b4/frame 0xffffff82477bf690 mi_switch() at mi_switch+0x238/frame 0xffffff82477bf6e0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82477bf720 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82477bf780 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82477bf7a0 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff82477bf800 seltdwait() at seltdwait+0xad/frame 0xffffff82477bf830 kern_select() at kern_select+0x79f/frame 0xffffff82477bfa80 sys_select() at sys_select+0x5d/frame 0xffffff82477bfad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff82477bfbf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff82477bfbf0 --- syscall (93, FreeBSD ELF64, sys_select), rip = 0x800d728aa, rsp = 0x7fffffffd858, rbp = 0x7fffffffdf31 --- Tracing command watchdogd pid 971 tid 100095 td 0xfffffe000b22e480 sched_switch() at sched_switch+0x1b4/frame 0xffffff824780f850 mi_switch() at mi_switch+0x238/frame 0xffffff824780f8a0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff824780f8e0 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff824780f940 sleepq_timedwait_sig() at sleepq_timedwait_sig+0x19/frame 0xffffff824780f970 _sleep() at _sleep+0x3c3/frame 0xffffff824780fa00 kern_nanosleep() at kern_nanosleep+0x118/frame 0xffffff824780fa70 sys_nanosleep() at sys_nanosleep+0x6e/frame 0xffffff824780fad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff824780fbf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff824780fbf0 --- syscall (240, FreeBSD ELF64, sys_nanosleep), rip = 0x800b27baa, rsp = 0x7fffffffdba8, rbp = 0x7fffffffdbf0 --- Tracing command cron pid 961 tid 100117 td 0xfffffe000b37d000 sched_switch() at sched_switch+0x1b4/frame 0xffffff824787d850 mi_switch() at mi_switch+0x238/frame 0xffffff824787d8a0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff824787d8e0 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff824787d940 sleepq_timedwait_sig() at sleepq_timedwait_sig+0x19/frame 0xffffff824787d970 _sleep() at _sleep+0x3c3/frame 0xffffff824787da00 kern_nanosleep() at kern_nanosleep+0x118/frame 0xffffff824787da70 sys_nanosleep() at sys_nanosleep+0x6e/frame 0xffffff824787dad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff824787dbf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff824787dbf0 --- syscall (240, FreeBSD ELF64, sys_nanosleep), rip = 0x800d39baa, rsp = 0x7fffffffdac8, rbp = 0x3c --- Tracing command sendmail pid 954 tid 100101 td 0xfffffe000b22d900 sched_switch() at sched_switch+0x1b4/frame 0xffffff824782d8b0 mi_switch() at mi_switch+0x238/frame 0xffffff824782d900 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff824782d940 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff824782d9a0 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff824782d9c0 _sleep() at _sleep+0x37d/frame 0xffffff824782da50 kern_sigsuspend() at kern_sigsuspend+0xaa/frame 0xffffff824782daa0 sys_sigsuspend() at sys_sigsuspend+0x34/frame 0xffffff824782dad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff824782dbf0 
Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff824782dbf0 --- syscall (4, FreeBSD ELF64, sys_write), rip = 0x80139609a, rsp = 0x7fffffffbda8, rbp = 0x1 --- Tracing command sendmail pid 950 tid 100093 td 0xfffffe000b135000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247805680 mi_switch() at mi_switch+0x238/frame 0xffffff82478056d0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247805710 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8247805770 sleepq_timedwait_sig() at sleepq_timedwait_sig+0x19/frame 0xffffff82478057a0 _cv_timedwait_sig() at _cv_timedwait_sig+0x18f/frame 0xffffff8247805800 seltdwait() at seltdwait+0x57/frame 0xffffff8247805830 kern_select() at kern_select+0x79f/frame 0xffffff8247805a80 sys_select() at sys_select+0x5d/frame 0xffffff8247805ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247805bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247805bf0 --- syscall (93, FreeBSD ELF64, sys_select), rip = 0x8014498aa, rsp = 0x7fffffffb218, rbp = 0x7fffffffb2b0 --- Tracing command sshd pid 942 tid 100088 td 0xfffffe000b22f000 sched_switch() at sched_switch+0x1b4/frame 0xffffff82477ec690 mi_switch() at mi_switch+0x238/frame 0xffffff82477ec6e0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82477ec720 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82477ec780 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82477ec7a0 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff82477ec800 seltdwait() at seltdwait+0xad/frame 0xffffff82477ec830 kern_select() at kern_select+0x79f/frame 0xffffff82477eca80 sys_select() at sys_select+0x5d/frame 0xffffff82477ecad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff82477ecbf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff82477ecbf0 --- syscall (93, FreeBSD ELF64, sys_select), rip = 0x80255f8aa, rsp = 0x7fffffffcd78, rbp = 0x2 --- Tracing command ntpd pid 851 tid 100098 td 0xfffffe000b1d2480 sched_switch() at sched_switch+0x1b4/frame 0xffffff824781e690 mi_switch() at mi_switch+0x238/frame 0xffffff824781e6e0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff824781e720 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff824781e780 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff824781e7a0 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff824781e800 seltdwait() at seltdwait+0xad/frame 0xffffff824781e830 kern_select() at kern_select+0x79f/frame 0xffffff824781ea80 sys_select() at sys_select+0x5d/frame 0xffffff824781ead0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff824781ebf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff824781ebf0 --- syscall (93, FreeBSD ELF64, sys_select), rip = 0x8013a98aa, rsp = 0x7fffffffdbc8, rbp = 0x7fffffffdd08 --- Tracing command nfsd pid 756 tid 100120 td 0xfffffe000b131480 sched_switch() at sched_switch+0x1b4/frame 0xffffff824788c8e0 mi_switch() at mi_switch+0x238/frame 0xffffff824788c930 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff824788c970 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff824788c9d0 sleepq_timedwait_sig() at sleepq_timedwait_sig+0x19/frame 0xffffff824788ca00 _cv_timedwait_sig() at _cv_timedwait_sig+0x18f/frame 0xffffff824788ca60 svc_run_internal() at svc_run_internal+0x895/frame 0xffffff824788cb90 svc_thread_start() at svc_thread_start+0xb/frame 0xffffff824788cba0 fork_exit() at fork_exit+0x139/frame 0xffffff824788cbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff824788cbf0 --- trap 0xc, rip = 0x800885cfa, rsp = 0x7fffffffd678, rbp = 0x5 --- 
Tracing command nfsd pid 756 tid 100119 td 0xfffffe000b131900 sched_switch() at sched_switch+0x1b4/frame 0xffffff82478878e0 mi_switch() at mi_switch+0x238/frame 0xffffff8247887930 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247887970 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82478879d0 sleepq_timedwait_sig() at sleepq_timedwait_sig+0x19/frame 0xffffff8247887a00 _cv_timedwait_sig() at _cv_timedwait_sig+0x18f/frame 0xffffff8247887a60 svc_run_internal() at svc_run_internal+0x895/frame 0xffffff8247887b90 svc_thread_start() at svc_thread_start+0xb/frame 0xffffff8247887ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8247887bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8247887bf0 --- trap 0xc, rip = 0x800885cfa, rsp = 0x7fffffffd678, rbp = 0x5 --- Tracing command nfsd pid 756 tid 100118 td 0xfffffe000b134000 sched_switch() at sched_switch+0x1b4/frame 0xffffff82478828e0 mi_switch() at mi_switch+0x238/frame 0xffffff8247882930 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247882970 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82478829d0 sleepq_timedwait_sig() at sleepq_timedwait_sig+0x19/frame 0xffffff8247882a00 _cv_timedwait_sig() at _cv_timedwait_sig+0x18f/frame 0xffffff8247882a60 svc_run_internal() at svc_run_internal+0x895/frame 0xffffff8247882b90 svc_thread_start() at svc_thread_start+0xb/frame 0xffffff8247882ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8247882bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8247882bf0 --- trap 0xc, rip = 0x800885cfa, rsp = 0x7fffffffd678, rbp = 0x5 --- Tracing command nfsd pid 756 tid 100114 td 0xfffffe000b37e000 sched_switch() at sched_switch+0x1b4/frame 0xffffff824786e070 mi_switch() at mi_switch+0x238/frame 0xffffff824786e0c0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff824786e100 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff824786e160 sleepq_timedwait_sig() at sleepq_timedwait_sig+0x19/frame 0xffffff824786e190 _cv_timedwait_sig() at _cv_timedwait_sig+0x18f/frame 0xffffff824786e1f0 svc_run_internal() at svc_run_internal+0x895/frame 0xffffff824786e320 svc_run() at svc_run+0x94/frame 0xffffff824786e340 nfsrvd_nfsd() at nfsrvd_nfsd+0x1c7/frame 0xffffff824786e490 nfssvc_nfsd() at nfssvc_nfsd+0x9b/frame 0xffffff824786eab0 sys_nfssvc() at sys_nfssvc+0xb0/frame 0xffffff824786ead0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff824786ebf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff824786ebf0 --- syscall (155, FreeBSD ELF64, sys_nfssvc), rip = 0x800885cfa, rsp = 0x7fffffffd678, rbp = 0x5 --- Tracing command nfsd pid 755 tid 100112 td 0xfffffe0008f41000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247864690 mi_switch() at mi_switch+0x238/frame 0xffffff82478646e0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247864720 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8247864780 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82478647a0 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff8247864800 seltdwait() at seltdwait+0xad/frame 0xffffff8247864830 kern_select() at kern_select+0x79f/frame 0xffffff8247864a80 sys_select() at sys_select+0x5d/frame 0xffffff8247864ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247864bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247864bf0 --- syscall (93, FreeBSD ELF64, sys_select), rip = 0x8009388aa, rsp = 0x7fffffffd928, rbp = 0x7fffffffdc00 --- Tracing command mountd pid 746 tid 100102 td 0xfffffe000b22d480 sched_switch() at 
sched_switch+0x1b4/frame 0xffffff8247832690 mi_switch() at mi_switch+0x238/frame 0xffffff82478326e0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247832720 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8247832780 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82478327a0 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff8247832800 seltdwait() at seltdwait+0xad/frame 0xffffff8247832830 kern_select() at kern_select+0x79f/frame 0xffffff8247832a80 sys_select() at sys_select+0x5d/frame 0xffffff8247832ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247832bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247832bf0 --- syscall (93, FreeBSD ELF64, sys_select), rip = 0x800b508aa, rsp = 0x7fffffffdb78, rbp = 0x801419060 --- Tracing command rpcbind pid 641 tid 100083 td 0xfffffe000b1d4900 sched_switch() at sched_switch+0x1b4/frame 0xffffff82477d3760 mi_switch() at mi_switch+0x238/frame 0xffffff82477d37b0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82477d37f0 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82477d3850 sleepq_timedwait_sig() at sleepq_timedwait_sig+0x19/frame 0xffffff82477d3880 _cv_timedwait_sig() at _cv_timedwait_sig+0x18f/frame 0xffffff82477d38e0 seltdwait() at seltdwait+0x57/frame 0xffffff82477d3910 sys_poll() at sys_poll+0x28a/frame 0xffffff82477d3ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff82477d3bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff82477d3bf0 --- syscall (209, FreeBSD ELF64, sys_poll), rip = 0x800cf350a, rsp = 0x7fffffffba48, rbp = 0x80141b020 --- Tracing command syslogd pid 615 tid 100092 td 0xfffffe000b135480 sched_switch() at sched_switch+0x1b4/frame 0xffffff8247800690 mi_switch() at mi_switch+0x238/frame 0xffffff82478006e0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8247800720 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8247800780 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff82478007a0 _cv_wait_sig() at _cv_wait_sig+0x181/frame 0xffffff8247800800 seltdwait() at seltdwait+0xad/frame 0xffffff8247800830 kern_select() at kern_select+0x79f/frame 0xffffff8247800a80 sys_select() at sys_select+0x5d/frame 0xffffff8247800ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8247800bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8247800bf0 --- syscall (93, FreeBSD ELF64, sys_select), rip = 0x800b4e8aa, rsp = 0x7fffffffd118, rbp = 0x80142c108 --- Tracing command devd pid 434 tid 100084 td 0xfffffe000b1d4480 sched_switch() at sched_switch+0x1b4/frame 0xffffff82477d8680 mi_switch() at mi_switch+0x238/frame 0xffffff82477d86d0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82477d8710 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff82477d8770 sleepq_timedwait_sig() at sleepq_timedwait_sig+0x19/frame 0xffffff82477d87a0 _cv_timedwait_sig() at _cv_timedwait_sig+0x18f/frame 0xffffff82477d8800 seltdwait() at seltdwait+0x57/frame 0xffffff82477d8830 kern_select() at kern_select+0x79f/frame 0xffffff82477d8a80 sys_select() at sys_select+0x5d/frame 0xffffff82477d8ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff82477d8bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff82477d8bf0 --- syscall (93, FreeBSD ELF64, sys_select), rip = 0x442aaa, rsp = 0x7fffffffd828, rbp = 0x10 --- Tracing command softdepflush pid 18 tid 100075 td 0xfffffe0008f44480 cpustop_handler() at cpustop_handler+0x2c/frame 0xffffff8000245d00 ipi_nmi_handler() at ipi_nmi_handler+0x3d/frame 0xffffff8000245d20 trap() at trap+0x325/frame 
0xffffff8000245f20 nmi_calltrap() at nmi_calltrap+0x8/frame 0xffffff8000245f20 --- trap 0x13, rip = 0xffffffff80b50947, rsp = 0xffffff8000245fe0, rbp = 0xffffff8234c44a30 --- uma_zfree_arg() at uma_zfree_arg+0x17/frame 0xffffff8234c44a30 free() at free+0xb3/frame 0xffffff8234c44a60 handle_workitem_freefile() at handle_workitem_freefile+0x121/frame 0xffffff8234c44ab0 process_worklist_item() at process_worklist_item+0x418/frame 0xffffff8234c44b20 softdep_process_worklist() at softdep_process_worklist+0x81/frame 0xffffff8234c44b60 softdep_flush() at softdep_flush+0xf0/frame 0xffffff8234c44ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234c44bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234c44bf0 --- trap 0, rip = 0, rsp = 0xffffff8234c44cb0, rbp = 0 --- Tracing command vnlru pid 17 tid 100074 td 0xfffffe0008f44900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234c3f9d0 mi_switch() at mi_switch+0x238/frame 0xffffff8234c3fa20 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234c3fa60 sleepq_timedwait() at sleepq_timedwait+0x4d/frame 0xffffff8234c3fa90 _sleep() at _sleep+0x29a/frame 0xffffff8234c3fb20 vnlru_proc() at vnlru_proc+0x537/frame 0xffffff8234c3fba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234c3fbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234c3fbf0 --- trap 0, rip = 0, rsp = 0xffffff8234c3fcb0, rbp = 0 --- Tracing command syncer pid 16 tid 100073 td 0xfffffe0008f45000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234c3a9f0 mi_switch() at mi_switch+0x238/frame 0xffffff8234c3aa40 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234c3aa80 sleepq_timedwait() at sleepq_timedwait+0x4d/frame 0xffffff8234c3aab0 _cv_timedwait() at _cv_timedwait+0x18f/frame 0xffffff8234c3ab10 sched_sync() at sched_sync+0x4ee/frame 0xffffff8234c3aba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234c3abf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234c3abf0 --- trap 0, rip = 0, rsp = 0xffffff8234c3acb0, rbp = 0 --- Tracing command bufdaemon pid 9 tid 100072 td 0xfffffe0008042000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234c35a30 mi_switch() at mi_switch+0x238/frame 0xffffff8234c35a80 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234c35ac0 sleepq_timedwait() at sleepq_timedwait+0x4d/frame 0xffffff8234c35af0 _sleep() at _sleep+0x29a/frame 0xffffff8234c35b80 buf_daemon() at buf_daemon+0x192/frame 0xffffff8234c35ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234c35bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234c35bf0 --- trap 0, rip = 0, rsp = 0xffffff8234c35cb0, rbp = 0 --- Tracing command pagezero pid 8 tid 100071 td 0xfffffe0008042480 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234c30a30 mi_switch() at mi_switch+0x238/frame 0xffffff8234c30a80 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234c30ac0 sleepq_timedwait() at sleepq_timedwait+0x4d/frame 0xffffff8234c30af0 _sleep() at _sleep+0x29a/frame 0xffffff8234c30b80 vm_pagezero() at vm_pagezero+0x73/frame 0xffffff8234c30ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234c30bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234c30bf0 --- trap 0, rip = 0, rsp = 0xffffff8234c30cb0, rbp = 0 --- Tracing command vmdaemon pid 7 tid 100070 td 0xfffffe0008042900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234c2b9e0 mi_switch() at mi_switch+0x238/frame 0xffffff8234c2ba30 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234c2ba70 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8234c2baa0 _sleep() at _sleep+0x3e9/frame 
0xffffff8234c2bb30 vm_daemon() at vm_daemon+0x4d/frame 0xffffff8234c2bba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234c2bbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234c2bbf0 --- trap 0, rip = 0, rsp = 0xffffff8234c2bcb0, rbp = 0 --- Tracing command pagedaemon pid 6 tid 100069 td 0xfffffe0008045000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234c26940 mi_switch() at mi_switch+0x238/frame 0xffffff8234c26990 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234c269d0 sleepq_timedwait() at sleepq_timedwait+0x4d/frame 0xffffff8234c26a00 _sleep() at _sleep+0x29a/frame 0xffffff8234c26a90 vm_pageout() at vm_pageout+0xb8f/frame 0xffffff8234c26ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234c26bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234c26bf0 --- trap 0, rip = 0, rsp = 0xffffff8234c26cb0, rbp = 0 --- Tracing command xpt_thrd pid 5 tid 100068 td 0xfffffe0008045480 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234aa1a30 mi_switch() at mi_switch+0x238/frame 0xffffff8234aa1a80 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234aa1ac0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8234aa1af0 _sleep() at _sleep+0x3e9/frame 0xffffff8234aa1b80 xpt_scanner_thread() at xpt_scanner_thread+0xdd/frame 0xffffff8234aa1ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234aa1bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234aa1bf0 --- trap 0, rip = 0, rsp = 0xffffff8234aa1cb0, rbp = 0 --- Tracing command sctp_iterator pid 4 tid 100067 td 0xfffffe0008045900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234a9ca40 mi_switch() at mi_switch+0x238/frame 0xffffff8234a9ca90 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234a9cad0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8234a9cb00 _sleep() at _sleep+0x3e9/frame 0xffffff8234a9cb90 sctp_iterator_thread() at sctp_iterator_thread+0x3f/frame 0xffffff8234a9cba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234a9cbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234a9cbf0 --- trap 0, rip = 0, rsp = 0xffffff8234a9ccb0, rbp = 0 --- Tracing command ctl_thrd pid 3 tid 100064 td 0xfffffe0008047900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234a8c3f0 mi_switch() at mi_switch+0x238/frame 0xffffff8234a8c440 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234a8c480 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8234a8c4b0 _sleep() at _sleep+0x3e9/frame 0xffffff8234a8c540 ctl_work_thread() at ctl_work_thread+0x1ce8/frame 0xffffff8234a8cba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234a8cbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234a8cbf0 --- trap 0, rip = 0, rsp = 0xffffff8234a8ccb0, rbp = 0 --- Tracing command fdc0 pid 2 tid 100061 td 0xfffffe0008048900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234a79990 mi_switch() at mi_switch+0x238/frame 0xffffff8234a799e0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234a79a20 sleepq_timedwait() at sleepq_timedwait+0x4d/frame 0xffffff8234a79a50 _sleep() at _sleep+0x29a/frame 0xffffff8234a79ae0 fdc_thread() at fdc_thread+0x7f4/frame 0xffffff8234a79ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234a79bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234a79bf0 --- trap 0, rip = 0, rsp = 0xffffff8234a79cb0, rbp = 0 --- Tracing command usb pid 15 tid 100058 td 0xfffffe0008017000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234a33a50 mi_switch() at mi_switch+0x238/frame 0xffffff8234a33aa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234a33ae0 sleepq_wait() at 
sleepq_wait+0x4d/frame 0xffffff8234a33b10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff8234a33b70 usb_process() at usb_process+0x172/frame 0xffffff8234a33ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234a33bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234a33bf0 --- trap 0, rip = 0, rsp = 0xffffff8234a33cb0, rbp = 0 --- Tracing command usb pid 15 tid 100057 td 0xfffffe0008017480 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234a2ea50 mi_switch() at mi_switch+0x238/frame 0xffffff8234a2eaa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234a2eae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8234a2eb10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff8234a2eb70 usb_process() at usb_process+0x172/frame 0xffffff8234a2eba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234a2ebf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234a2ebf0 --- trap 0, rip = 0, rsp = 0xffffff8234a2ecb0, rbp = 0 --- Tracing command usb pid 15 tid 100056 td 0xfffffe0008017900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234a29a50 mi_switch() at mi_switch+0x238/frame 0xffffff8234a29aa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234a29ae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8234a29b10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff8234a29b70 usb_process() at usb_process+0x172/frame 0xffffff8234a29ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234a29bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234a29bf0 --- trap 0, rip = 0, rsp = 0xffffff8234a29cb0, rbp = 0 --- Tracing command usb pid 15 tid 100055 td 0xfffffe0008018000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234a24a50 mi_switch() at mi_switch+0x238/frame 0xffffff8234a24aa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234a24ae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8234a24b10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff8234a24b70 usb_process() at usb_process+0x172/frame 0xffffff8234a24ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234a24bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234a24bf0 --- trap 0, rip = 0, rsp = 0xffffff8234a24cb0, rbp = 0 --- Tracing command usb pid 15 tid 100053 td 0xfffffe0008018900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234897a50 mi_switch() at mi_switch+0x238/frame 0xffffff8234897aa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234897ae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8234897b10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff8234897b70 usb_process() at usb_process+0x172/frame 0xffffff8234897ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234897bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234897bf0 --- trap 0, rip = 0, rsp = 0xffffff8234897cb0, rbp = 0 --- Tracing command usb pid 15 tid 100052 td 0xfffffe0008019000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234892a50 mi_switch() at mi_switch+0x238/frame 0xffffff8234892aa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234892ae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8234892b10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff8234892b70 usb_process() at usb_process+0x172/frame 0xffffff8234892ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234892bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234892bf0 --- trap 0, rip = 0, rsp = 0xffffff8234892cb0, rbp = 0 --- Tracing command usb pid 15 tid 100051 td 0xfffffe0008019480 sched_switch() at sched_switch+0x1b4/frame 0xffffff823488da50 mi_switch() at mi_switch+0x238/frame 0xffffff823488daa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff823488dae0 
sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff823488db10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff823488db70 usb_process() at usb_process+0x172/frame 0xffffff823488dba0 fork_exit() at fork_exit+0x139/frame 0xffffff823488dbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff823488dbf0 --- trap 0, rip = 0, rsp = 0xffffff823488dcb0, rbp = 0 --- Tracing command usb pid 15 tid 100050 td 0xfffffe0008019900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234888a50 mi_switch() at mi_switch+0x238/frame 0xffffff8234888aa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234888ae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8234888b10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff8234888b70 usb_process() at usb_process+0x172/frame 0xffffff8234888ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234888bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234888bf0 --- trap 0, rip = 0, rsp = 0xffffff8234888cb0, rbp = 0 --- Tracing command usb pid 15 tid 100049 td 0xfffffe000800a000 sched_switch() at sched_switch+0x1b4/frame 0xffffff823483fa50 mi_switch() at mi_switch+0x238/frame 0xffffff823483faa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff823483fae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff823483fb10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff823483fb70 usb_process() at usb_process+0x172/frame 0xffffff823483fba0 fork_exit() at fork_exit+0x139/frame 0xffffff823483fbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff823483fbf0 --- trap 0, rip = 0, rsp = 0xffffff823483fcb0, rbp = 0 --- Tracing command usb pid 15 tid 100048 td 0xfffffe000800a480 sched_switch() at sched_switch+0x1b4/frame 0xffffff823483aa50 mi_switch() at mi_switch+0x238/frame 0xffffff823483aaa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff823483aae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff823483ab10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff823483ab70 usb_process() at usb_process+0x172/frame 0xffffff823483aba0 fork_exit() at fork_exit+0x139/frame 0xffffff823483abf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff823483abf0 --- trap 0, rip = 0, rsp = 0xffffff823483acb0, rbp = 0 --- Tracing command usb pid 15 tid 100047 td 0xfffffe000800a900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234835a50 mi_switch() at mi_switch+0x238/frame 0xffffff8234835aa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234835ae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8234835b10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff8234835b70 usb_process() at usb_process+0x172/frame 0xffffff8234835ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234835bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234835bf0 --- trap 0, rip = 0, rsp = 0xffffff8234835cb0, rbp = 0 --- Tracing command usb pid 15 tid 100046 td 0xfffffe000800b000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234830a50 mi_switch() at mi_switch+0x238/frame 0xffffff8234830aa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234830ae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8234830b10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff8234830b70 usb_process() at usb_process+0x172/frame 0xffffff8234830ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234830bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234830bf0 --- trap 0, rip = 0, rsp = 0xffffff8234830cb0, rbp = 0 --- Tracing command usb pid 15 tid 100045 td 0xfffffe000800b480 sched_switch() at sched_switch+0x1b4/frame 0xffffff82347e7a50 mi_switch() at mi_switch+0x238/frame 0xffffff82347e7aa0 sleepq_switch() at sleepq_switch+0xfe/frame 
0xffffff82347e7ae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff82347e7b10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff82347e7b70 usb_process() at usb_process+0x172/frame 0xffffff82347e7ba0 fork_exit() at fork_exit+0x139/frame 0xffffff82347e7bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff82347e7bf0 --- trap 0, rip = 0, rsp = 0xffffff82347e7cb0, rbp = 0 --- Tracing command usb pid 15 tid 100044 td 0xfffffe000800b900 sched_switch() at sched_switch+0x1b4/frame 0xffffff82347e2a50 mi_switch() at mi_switch+0x238/frame 0xffffff82347e2aa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82347e2ae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff82347e2b10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff82347e2b70 usb_process() at usb_process+0x172/frame 0xffffff82347e2ba0 fork_exit() at fork_exit+0x139/frame 0xffffff82347e2bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff82347e2bf0 --- trap 0, rip = 0, rsp = 0xffffff82347e2cb0, rbp = 0 --- Tracing command usb pid 15 tid 100043 td 0xfffffe000800d000 sched_switch() at sched_switch+0x1b4/frame 0xffffff82347dda50 mi_switch() at mi_switch+0x238/frame 0xffffff82347ddaa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82347ddae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff82347ddb10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff82347ddb70 usb_process() at usb_process+0x172/frame 0xffffff82347ddba0 fork_exit() at fork_exit+0x139/frame 0xffffff82347ddbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff82347ddbf0 --- trap 0, rip = 0, rsp = 0xffffff82347ddcb0, rbp = 0 --- Tracing command usb pid 15 tid 100042 td 0xfffffe000800d480 sched_switch() at sched_switch+0x1b4/frame 0xffffff82347d8a50 mi_switch() at mi_switch+0x238/frame 0xffffff82347d8aa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff82347d8ae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff82347d8b10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff82347d8b70 usb_process() at usb_process+0x172/frame 0xffffff82347d8ba0 fork_exit() at fork_exit+0x139/frame 0xffffff82347d8bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff82347d8bf0 --- trap 0, rip = 0, rsp = 0xffffff82347d8cb0, rbp = 0 --- Tracing command usb pid 15 tid 100040 td 0xfffffe000800f000 sched_switch() at sched_switch+0x1b4/frame 0xffffff823478aa50 mi_switch() at mi_switch+0x238/frame 0xffffff823478aaa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff823478aae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff823478ab10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff823478ab70 usb_process() at usb_process+0x172/frame 0xffffff823478aba0 fork_exit() at fork_exit+0x139/frame 0xffffff823478abf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff823478abf0 --- trap 0, rip = 0, rsp = 0xffffff823478acb0, rbp = 0 --- Tracing command usb pid 15 tid 100039 td 0xfffffe0008003000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234785a50 mi_switch() at mi_switch+0x238/frame 0xffffff8234785aa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234785ae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8234785b10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff8234785b70 usb_process() at usb_process+0x172/frame 0xffffff8234785ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234785bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234785bf0 --- trap 0, rip = 0, rsp = 0xffffff8234785cb0, rbp = 0 --- Tracing command usb pid 15 tid 100038 td 0xfffffe0008003480 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234780a50 mi_switch() at mi_switch+0x238/frame 0xffffff8234780aa0 sleepq_switch() at 
sleepq_switch+0xfe/frame 0xffffff8234780ae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8234780b10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff8234780b70 usb_process() at usb_process+0x172/frame 0xffffff8234780ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234780bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234780bf0 --- trap 0, rip = 0, rsp = 0xffffff8234780cb0, rbp = 0 --- Tracing command usb pid 15 tid 100037 td 0xfffffe0008003900 sched_switch() at sched_switch+0x1b4/frame 0xffffff823477ba50 mi_switch() at mi_switch+0x238/frame 0xffffff823477baa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff823477bae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff823477bb10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff823477bb70 usb_process() at usb_process+0x172/frame 0xffffff823477bba0 fork_exit() at fork_exit+0x139/frame 0xffffff823477bbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff823477bbf0 --- trap 0, rip = 0, rsp = 0xffffff823477bcb0, rbp = 0 --- Tracing command usb pid 15 tid 100035 td 0xfffffe0008004480 sched_switch() at sched_switch+0x1b4/frame 0xffffff823472da50 mi_switch() at mi_switch+0x238/frame 0xffffff823472daa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff823472dae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff823472db10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff823472db70 usb_process() at usb_process+0x172/frame 0xffffff823472dba0 fork_exit() at fork_exit+0x139/frame 0xffffff823472dbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff823472dbf0 --- trap 0, rip = 0, rsp = 0xffffff823472dcb0, rbp = 0 --- Tracing command usb pid 15 tid 100034 td 0xfffffe0008004900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234728a50 mi_switch() at mi_switch+0x238/frame 0xffffff8234728aa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234728ae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8234728b10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff8234728b70 usb_process() at usb_process+0x172/frame 0xffffff8234728ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234728bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234728bf0 --- trap 0, rip = 0, rsp = 0xffffff8234728cb0, rbp = 0 --- Tracing command usb pid 15 tid 100033 td 0xfffffe0008005000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234723a50 mi_switch() at mi_switch+0x238/frame 0xffffff8234723aa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234723ae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8234723b10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff8234723b70 usb_process() at usb_process+0x172/frame 0xffffff8234723ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234723bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234723bf0 --- trap 0, rip = 0, rsp = 0xffffff8234723cb0, rbp = 0 --- Tracing command usb pid 15 tid 100032 td 0xfffffe0008005480 sched_switch() at sched_switch+0x1b4/frame 0xffffff823471ea50 mi_switch() at mi_switch+0x238/frame 0xffffff823471eaa0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff823471eae0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff823471eb10 _cv_wait() at _cv_wait+0x17d/frame 0xffffff823471eb70 usb_process() at usb_process+0x172/frame 0xffffff823471eba0 fork_exit() at fork_exit+0x139/frame 0xffffff823471ebf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff823471ebf0 --- trap 0, rip = 0, rsp = 0xffffff823471ecb0, rbp = 0 --- Tracing command yarrow pid 14 tid 100017 td 0xfffffe000524a900 sched_switch() at sched_switch+0x1b4/frame 0xffffff80002a3a10 mi_switch() at mi_switch+0x238/frame 0xffffff80002a3a60 
sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff80002a3aa0 sleepq_timedwait() at sleepq_timedwait+0x4d/frame 0xffffff80002a3ad0 _sleep() at _sleep+0x29a/frame 0xffffff80002a3b60 random_kthread() at random_kthread+0x1ad/frame 0xffffff80002a3ba0 fork_exit() at fork_exit+0x139/frame 0xffffff80002a3bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff80002a3bf0 --- trap 0, rip = 0, rsp = 0xffffff80002a3cb0, rbp = 0 --- Tracing command geom pid 13 tid 100015 td 0xfffffe0005233000 sched_switch() at sched_switch+0x1b4/frame 0xffffff80002999f0 mi_switch() at mi_switch+0x238/frame 0xffffff8000299a40 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8000299a80 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8000299ab0 _sleep() at _sleep+0x3e9/frame 0xffffff8000299b40 g_io_schedule_down() at g_io_schedule_down+0x26f/frame 0xffffff8000299b90 g_down_procbody() at g_down_procbody+0x7c/frame 0xffffff8000299ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8000299bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8000299bf0 --- trap 0, rip = 0, rsp = 0xffffff8000299cb0, rbp = 0 --- Tracing command geom pid 13 tid 100014 td 0xfffffe0005233480 sched_switch() at sched_switch+0x1b4/frame 0xffffff8000294a20 mi_switch() at mi_switch+0x238/frame 0xffffff8000294a70 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8000294ab0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8000294ae0 _sleep() at _sleep+0x3e9/frame 0xffffff8000294b70 g_io_schedule_up() at g_io_schedule_up+0x138/frame 0xffffff8000294b90 g_up_procbody() at g_up_procbody+0x7c/frame 0xffffff8000294ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8000294bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8000294bf0 --- trap 0, rip = 0, rsp = 0xffffff8000294cb0, rbp = 0 --- Tracing command geom pid 13 tid 100013 td 0xfffffe0005233900 sched_switch() at sched_switch+0x1b4/frame 0xffffff800028fa20 mi_switch() at mi_switch+0x238/frame 0xffffff800028fa70 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff800028fab0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff800028fae0 _sleep() at _sleep+0x3e9/frame 0xffffff800028fb70 g_run_events() at g_run_events+0x449/frame 0xffffff800028fba0 fork_exit() at fork_exit+0x139/frame 0xffffff800028fbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff800028fbf0 --- trap 0, rip = 0, rsp = 0xffffff800028fcb0, rbp = 0 --- Tracing command intr pid 12 tid 100063 td 0xfffffe0008048000 fork_trampoline() at fork_trampoline Tracing command intr pid 12 tid 100062 td 0xfffffe0008048480 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234a7eb00 mi_switch() at mi_switch+0x238/frame 0xffffff8234a7eb50 ithread_loop() at ithread_loop+0x273/frame 0xffffff8234a7eba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234a7ebf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234a7ebf0 --- trap 0, rip = 0, rsp = 0xffffff8234a7ecb0, rbp = 0 --- Tracing command intr pid 12 tid 100060 td 0xfffffe000800f480 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234a6ab00 mi_switch() at mi_switch+0x238/frame 0xffffff8234a6ab50 ithread_loop() at ithread_loop+0x273/frame 0xffffff8234a6aba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234a6abf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234a6abf0 --- trap 0, rip = 0, rsp = 0xffffff8234a6acb0, rbp = 0 --- Tracing command intr pid 12 tid 100059 td 0xfffffe000800f900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234a5bb00 mi_switch() at mi_switch+0x238/frame 0xffffff8234a5bb50 ithread_loop() at ithread_loop+0x273/frame 0xffffff8234a5bba0 
fork_exit() at fork_exit+0x139/frame 0xffffff8234a5bbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234a5bbf0 --- trap 0, rip = 0, rsp = 0xffffff8234a5bcb0, rbp = 0 --- Tracing command intr pid 12 tid 100054 td 0xfffffe0008018480 fork_trampoline() at fork_trampoline Tracing command intr pid 12 tid 100041 td 0xfffffe000800d900 sched_switch() at sched_switch+0x1b4/frame 0xffffff82347d3b00 mi_switch() at mi_switch+0x238/frame 0xffffff82347d3b50 ithread_loop() at ithread_loop+0x273/frame 0xffffff82347d3ba0 fork_exit() at fork_exit+0x139/frame 0xffffff82347d3bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff82347d3bf0 --- trap 0, rip = 0, rsp = 0xffffff82347d3cb0, rbp = 0 --- Tracing command intr pid 12 tid 100036 td 0xfffffe0008004000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234776b00 mi_switch() at mi_switch+0x238/frame 0xffffff8234776b50 ithread_loop() at ithread_loop+0x273/frame 0xffffff8234776ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234776bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234776bf0 --- trap 0, rip = 0, rsp = 0xffffff8234776cb0, rbp = 0 --- Tracing command intr pid 12 tid 100031 td 0xfffffe0008005900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234719b00 mi_switch() at mi_switch+0x238/frame 0xffffff8234719b50 ithread_loop() at ithread_loop+0x273/frame 0xffffff8234719ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234719bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234719bf0 --- trap 0, rip = 0, rsp = 0xffffff8234719cb0, rbp = 0 --- Tracing command intr pid 12 tid 100030 td 0xfffffe000524c480 sched_switch() at sched_switch+0x1b4/frame 0xffffff80003ecb00 mi_switch() at mi_switch+0x238/frame 0xffffff80003ecb50 ithread_loop() at ithread_loop+0x273/frame 0xffffff80003ecba0 fork_exit() at fork_exit+0x139/frame 0xffffff80003ecbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff80003ecbf0 --- trap 0, rip = 0, rsp = 0xffffff80003eccb0, rbp = 0 --- Tracing command intr pid 12 tid 100029 td 0xfffffe000524c900 sched_switch() at sched_switch+0x1b4/frame 0xffffff80003e7b00 mi_switch() at mi_switch+0x238/frame 0xffffff80003e7b50 ithread_loop() at ithread_loop+0x273/frame 0xffffff80003e7ba0 fork_exit() at fork_exit+0x139/frame 0xffffff80003e7bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff80003e7bf0 --- trap 0, rip = 0, rsp = 0xffffff80003e7cb0, rbp = 0 --- Tracing command intr pid 12 tid 100028 td 0xfffffe00053e9000 sched_switch() at sched_switch+0x1b4/frame 0xffffff80002dbb00 mi_switch() at mi_switch+0x238/frame 0xffffff80002dbb50 ithread_loop() at ithread_loop+0x273/frame 0xffffff80002dbba0 fork_exit() at fork_exit+0x139/frame 0xffffff80002dbbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff80002dbbf0 --- trap 0, rip = 0, rsp = 0xffffff80002dbcb0, rbp = 0 --- Tracing command intr pid 12 tid 100023 td 0xfffffe00053ea900 sched_switch() at sched_switch+0x1b4/frame 0xffffff80002c2b00 mi_switch() at mi_switch+0x238/frame 0xffffff80002c2b50 ithread_loop() at ithread_loop+0x273/frame 0xffffff80002c2ba0 fork_exit() at fork_exit+0x139/frame 0xffffff80002c2bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff80002c2bf0 --- trap 0, rip = 0, rsp = 0xffffff80002c2cb0, rbp = 0 --- Tracing command intr pid 12 tid 100022 td 0xfffffe0005249000 sched_switch() at sched_switch+0x1b4/frame 0xffffff80002bdb00 mi_switch() at mi_switch+0x238/frame 0xffffff80002bdb50 ithread_loop() at ithread_loop+0x273/frame 0xffffff80002bdba0 fork_exit() at fork_exit+0x139/frame 0xffffff80002bdbf0 fork_trampoline() at 
fork_trampoline+0xe/frame 0xffffff80002bdbf0 --- trap 0, rip = 0, rsp = 0xffffff80002bdcb0, rbp = 0 --- Tracing command intr pid 12 tid 100021 td 0xfffffe0005249480 sched_switch() at sched_switch+0x1b4/frame 0xffffff80002b8b00 mi_switch() at mi_switch+0x238/frame 0xffffff80002b8b50 ithread_loop() at ithread_loop+0x273/frame 0xffffff80002b8ba0 fork_exit() at fork_exit+0x139/frame 0xffffff80002b8bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff80002b8bf0 --- trap 0, rip = 0, rsp = 0xffffff80002b8cb0, rbp = 0 --- Tracing command intr pid 12 tid 100019 td 0xfffffe000524a000 fork_trampoline() at fork_trampoline Tracing command intr pid 12 tid 100012 td 0xfffffe0005234000 fork_trampoline() at fork_trampoline Tracing command intr pid 12 tid 100011 td 0xfffffe0005234480 sched_switch() at sched_switch+0x1b4/frame 0xffffff8000285b00 mi_switch() at mi_switch+0x238/frame 0xffffff8000285b50 ithread_loop() at ithread_loop+0x273/frame 0xffffff8000285ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8000285bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8000285bf0 --- trap 0, rip = 0, rsp = 0xffffff8000285cb0, rbp = 0 --- Tracing command intr pid 12 tid 100010 td 0xfffffe0005234900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8000280b00 mi_switch() at mi_switch+0x238/frame 0xffffff8000280b50 ithread_loop() at ithread_loop+0x273/frame 0xffffff8000280ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8000280bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8000280bf0 --- trap 0, rip = 0, rsp = 0xffffff8000280cb0, rbp = 0 --- Tracing command intr pid 12 tid 100009 td 0xfffffe0005218480 sched_switch() at sched_switch+0x1b4/frame 0xffffff800027bb00 mi_switch() at mi_switch+0x238/frame 0xffffff800027bb50 ithread_loop() at ithread_loop+0x273/frame 0xffffff800027bba0 fork_exit() at fork_exit+0x139/frame 0xffffff800027bbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff800027bbf0 --- trap 0, rip = 0, rsp = 0xffffff800027bcb0, rbp = 0 --- Tracing command intr pid 12 tid 100008 td 0xfffffe0005218900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8000276b00 mi_switch() at mi_switch+0x238/frame 0xffffff8000276b50 ithread_loop() at ithread_loop+0x273/frame 0xffffff8000276ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8000276bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8000276bf0 --- trap 0, rip = 0, rsp = 0xffffff8000276cb0, rbp = 0 --- Tracing command intr pid 12 tid 100007 td 0xfffffe0005221000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8000271b00 mi_switch() at mi_switch+0x238/frame 0xffffff8000271b50 ithread_loop() at ithread_loop+0x273/frame 0xffffff8000271ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8000271bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8000271bf0 --- trap 0, rip = 0, rsp = 0xffffff8000271cb0, rbp = 0 --- Tracing command idle pid 11 tid 100006 td 0xfffffe0005221480 sched_switch() at sched_switch+0x1b4/frame 0xffffff800026cac0 mi_switch() at mi_switch+0x238/frame 0xffffff800026cb10 sched_idletd() at sched_idletd+0x345/frame 0xffffff800026cba0 fork_exit() at fork_exit+0x139/frame 0xffffff800026cbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff800026cbf0 --- trap 0, rip = 0, rsp = 0xffffff800026ccb0, rbp = 0 --- Tracing command idle pid 11 tid 100005 td 0xfffffe0005221900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8000267ac0 mi_switch() at mi_switch+0x238/frame 0xffffff8000267b10 sched_idletd() at sched_idletd+0x1f5/frame 0xffffff8000267ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8000267bf0 
fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8000267bf0 --- trap 0, rip = 0, rsp = 0xffffff8000267cb0, rbp = 0 --- Tracing command idle pid 11 tid 100004 td 0xfffffe0005215000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8000262ac0 mi_switch() at mi_switch+0x238/frame 0xffffff8000262b10 sched_idletd() at sched_idletd+0x345/frame 0xffffff8000262ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8000262bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8000262bf0 --- trap 0, rip = 0, rsp = 0xffffff8000262cb0, rbp = 0 --- Tracing command idle pid 11 tid 100003 td 0xfffffe0005215480 sched_switch() at sched_switch+0x1b4/frame 0xffffff800025dac0 mi_switch() at mi_switch+0x238/frame 0xffffff800025db10 sched_idletd() at sched_idletd+0x345/frame 0xffffff800025dba0 fork_exit() at fork_exit+0x139/frame 0xffffff800025dbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff800025dbf0 --- trap 0, rip = 0, rsp = 0xffffff800025dcb0, rbp = 0 --- Tracing command init pid 1 tid 100002 td 0xfffffe0005215900 sched_switch() at sched_switch+0x1b4/frame 0xffffff8000258670 mi_switch() at mi_switch+0x238/frame 0xffffff80002586c0 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8000258700 sleepq_catch_signals() at sleepq_catch_signals+0x2c6/frame 0xffffff8000258760 sleepq_wait_sig() at sleepq_wait_sig+0x16/frame 0xffffff8000258780 _sleep() at _sleep+0x37d/frame 0xffffff8000258810 kern_wait6() at kern_wait6+0x5f1/frame 0xffffff80002588b0 kern_wait() at kern_wait+0x9c/frame 0xffffff8000258a10 sys_wait4() at sys_wait4+0x35/frame 0xffffff8000258ad0 amd64_syscall() at amd64_syscall+0x2d3/frame 0xffffff8000258bf0 Xfast_syscall() at Xfast_syscall+0xf7/frame 0xffffff8000258bf0 --- syscall (7, FreeBSD ELF64, sys_wait4), rip = 0x41242a, rsp = 0x7fffffffd798, rbp = 0x8a --- Tracing command audit pid 10 tid 100001 td 0xfffffe0005218000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8000253a00 mi_switch() at mi_switch+0x238/frame 0xffffff8000253a50 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8000253a90 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8000253ac0 _cv_wait() at _cv_wait+0x17d/frame 0xffffff8000253b20 audit_worker() at audit_worker+0x77/frame 0xffffff8000253ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8000253bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8000253bf0 --- trap 0, rip = 0, rsp = 0xffffff8000253cb0, rbp = 0 --- Tracing command kernel pid 0 tid 100066 td 0xfffffe0008047000 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234a96a10 mi_switch() at mi_switch+0x238/frame 0xffffff8234a96a60 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234a96aa0 sleepq_timedwait() at sleepq_timedwait+0x4d/frame 0xffffff8234a96ad0 _sleep() at _sleep+0x29a/frame 0xffffff8234a96b60 deadlkres() at deadlkres+0x2c3/frame 0xffffff8234a96ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234a96bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff8234a96bf0 --- trap 0, rip = 0, rsp = 0xffffff8234a96cb0, rbp = 0 --- Tracing command kernel pid 0 tid 100065 td 0xfffffe0008047480 sched_switch() at sched_switch+0x1b4/frame 0xffffff8234a91a40 mi_switch() at mi_switch+0x238/frame 0xffffff8234a91a90 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff8234a91ad0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff8234a91b00 msleep_spin() at msleep_spin+0x22d/frame 0xffffff8234a91b70 taskqueue_thread_loop() at taskqueue_thread_loop+0x6f/frame 0xffffff8234a91ba0 fork_exit() at fork_exit+0x139/frame 0xffffff8234a91bf0 fork_trampoline() at fork_trampoline+0xe/frame 
0xffffff8234a91bf0 --- trap 0, rip = 0, rsp = 0xffffff8234a91cb0, rbp = 0 --- Tracing command kernel pid 0 tid 100027 td 0xfffffe00053e9480 sched_switch() at sched_switch+0x1b4/frame 0xffffff80002d6a40 mi_switch() at mi_switch+0x238/frame 0xffffff80002d6a90 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff80002d6ad0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff80002d6b00 msleep_spin() at msleep_spin+0x22d/frame 0xffffff80002d6b70 taskqueue_thread_loop() at taskqueue_thread_loop+0x6f/frame 0xffffff80002d6ba0 fork_exit() at fork_exit+0x139/frame 0xffffff80002d6bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff80002d6bf0 --- trap 0, rip = 0, rsp = 0xffffff80002d6cb0, rbp = 0 --- Tracing command kernel pid 0 tid 100026 td 0xfffffe00053e9900 sched_switch() at sched_switch+0x1b4/frame 0xffffff80002d1a40 mi_switch() at mi_switch+0x238/frame 0xffffff80002d1a90 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff80002d1ad0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff80002d1b00 msleep_spin() at msleep_spin+0x22d/frame 0xffffff80002d1b70 taskqueue_thread_loop() at taskqueue_thread_loop+0x6f/frame 0xffffff80002d1ba0 fork_exit() at fork_exit+0x139/frame 0xffffff80002d1bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff80002d1bf0 --- trap 0, rip = 0, rsp = 0xffffff80002d1cb0, rbp = 0 --- Tracing command kernel pid 0 tid 100025 td 0xfffffe00053ea000 sched_switch() at sched_switch+0x1b4/frame 0xffffff80002cca40 mi_switch() at mi_switch+0x238/frame 0xffffff80002cca90 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff80002ccad0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff80002ccb00 msleep_spin() at msleep_spin+0x22d/frame 0xffffff80002ccb70 taskqueue_thread_loop() at taskqueue_thread_loop+0x6f/frame 0xffffff80002ccba0 fork_exit() at fork_exit+0x139/frame 0xffffff80002ccbf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff80002ccbf0 --- trap 0, rip = 0, rsp = 0xffffff80002cccb0, rbp = 0 --- Tracing command kernel pid 0 tid 100024 td 0xfffffe00053ea480 sched_switch() at sched_switch+0x1b4/frame 0xffffff80002c7a20 mi_switch() at mi_switch+0x238/frame 0xffffff80002c7a70 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff80002c7ab0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff80002c7ae0 _sleep() at _sleep+0x3e9/frame 0xffffff80002c7b70 taskqueue_thread_loop() at taskqueue_thread_loop+0xc7/frame 0xffffff80002c7ba0 fork_exit() at fork_exit+0x139/frame 0xffffff80002c7bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff80002c7bf0 --- trap 0, rip = 0, rsp = 0xffffff80002c7cb0, rbp = 0 --- Tracing command kernel pid 0 tid 100020 td 0xfffffe0005249900 sched_switch() at sched_switch+0x1b4/frame 0xffffff80002b3a20 mi_switch() at mi_switch+0x238/frame 0xffffff80002b3a70 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff80002b3ab0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff80002b3ae0 _sleep() at _sleep+0x3e9/frame 0xffffff80002b3b70 taskqueue_thread_loop() at taskqueue_thread_loop+0xc7/frame 0xffffff80002b3ba0 fork_exit() at fork_exit+0x139/frame 0xffffff80002b3bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff80002b3bf0 --- trap 0, rip = 0, rsp = 0xffffff80002b3cb0, rbp = 0 --- Tracing command kernel pid 0 tid 100018 td 0xfffffe000524a480 sched_switch() at sched_switch+0x1b4/frame 0xffffff80002a9a20 mi_switch() at mi_switch+0x238/frame 0xffffff80002a9a70 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff80002a9ab0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff80002a9ae0 _sleep() at _sleep+0x3e9/frame 0xffffff80002a9b70 taskqueue_thread_loop() at 
taskqueue_thread_loop+0xc7/frame 0xffffff80002a9ba0 fork_exit() at fork_exit+0x139/frame 0xffffff80002a9bf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff80002a9bf0 --- trap 0, rip = 0, rsp = 0xffffff80002a9cb0, rbp = 0 --- Tracing command kernel pid 0 tid 100016 td 0xfffffe000524c000 sched_switch() at sched_switch+0x1b4/frame 0xffffff800029ea20 mi_switch() at mi_switch+0x238/frame 0xffffff800029ea70 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffff800029eab0 sleepq_wait() at sleepq_wait+0x4d/frame 0xffffff800029eae0 _sleep() at _sleep+0x3e9/frame 0xffffff800029eb70 taskqueue_thread_loop() at taskqueue_thread_loop+0xc7/frame 0xffffff800029eba0 fork_exit() at fork_exit+0x139/frame 0xffffff800029ebf0 fork_trampoline() at fork_trampoline+0xe/frame 0xffffff800029ebf0 --- trap 0, rip = 0, rsp = 0xffffff800029ecb0, rbp = 0 --- Tracing command kernel pid 0 tid 100000 td 0xffffffff81358110 sched_switch() at sched_switch+0x1b4/frame 0xffffffff818f2b00 mi_switch() at mi_switch+0x238/frame 0xffffffff818f2b50 sleepq_switch() at sleepq_switch+0xfe/frame 0xffffffff818f2b90 sleepq_timedwait() at sleepq_timedwait+0x4d/frame 0xffffffff818f2bc0 _sleep() at _sleep+0x29a/frame 0xffffffff818f2c50 scheduler() at scheduler+0x2b0/frame 0xffffffff818f2c90 mi_startup() at mi_startup+0x77/frame 0xffffffff818f2cb0 btext() at btext+0x2c
db:0:allt> call doadump
Dumping 1503 out of 8040 MB:..2%..11%..21%..31%..41%..51%..61%..71%..81%..91%
Dump complete = 0
db:0:doadump> reset
cpu_reset: Restarting BSP
cpu_reset_proxy: Stopped CPU 3
(kgdb) bt
#0  doadump (textdump=0x5214000) at ../../../kern/kern_shutdown.c:263
#1  0xffffffff803431dc in db_fncall (dummy1=<value optimized out>, dummy2=<value optimized out>, dummy3=<value optimized out>, dummy4=<value optimized out>) at ../../../ddb/db_command.c:578
#2  0xffffffff8034348d in db_command (last_cmdp=0xffffffff8131f660, cmd_table=<value optimized out>, dopager=0x0) at ../../../ddb/db_command.c:449
#3  0xffffffff80348023 in db_script_exec (scriptname=0xffffffff8131ff00 "doadump", warnifnotfound=0x1) at ../../../ddb/db_script.c:302
#4  0xffffffff80343511 in db_command (last_cmdp=0xffffffff8131f660, cmd_table=<value optimized out>, dopager=0x1) at ../../../ddb/db_command.c:449
#5  0xffffffff80343760 in db_command_loop () at ../../../ddb/db_command.c:502
#6  0xffffffff803458d9 in db_trap (type=<value optimized out>, code=<value optimized out>) at ../../../ddb/db_main.c:231
#7  0xffffffff80916ac8 in kdb_trap (type=0x3, code=0x0, tf=0xffffff8247c8d0e0) at ../../../kern/subr_kdb.c:654
#8  0xffffffff80c8051d in trap (frame=0xffffff8247c8d0e0) at ../../../amd64/amd64/trap.c:579
#9  0xffffffff80c69183 in calltrap () at ../../../amd64/amd64/exception.S:228
#10 0xffffffff8091654b in kdb_enter (why=0xffffffff80ef250e "panic", msg=0x80
) at cpufunc.h:63 #11 0xffffffff808dbfe1 in vpanic (fmt=, ap=) at ../../../kern/kern_shutdown.c:746 #12 0xffffffff808dc1c3 in kassert_panic (fmt=0xffffffff80f01032 "scratch bp !B_KVAALLOC %p") at ../../../kern/kern_shutdown.c:641 #13 0xffffffff8096b29c in getblk (vp=0xfffffe0101f6c750, blkno=0xfffffffffffffff4, size=0x8000, slpflag=0x0, slptimeo=0x0, flags=0x0) at ../../../kern/vfs_bio.c:2904 #14 0xffffffff8096b6d0 in breadn_flags (vp=0xfffffe0101f6c750, blkno=, size=, rablkno=0x0, rabsize=0x0, cnt=0x0, cred=0x0, flags=0x0, bpp=0xffffff8247c8d550) at ../../../kern/vfs_bio.c:969 #15 0xffffffff80b0b646 in ffs_balloc_ufs2 (vp=0xfffffe0101f6c750, startoffset=, size=, cred=0xfffffe000b160300, flags=0x10000, bpp=0xffffff8247c8d628) at ../../../ufs/ffs/ffs_balloc.c:834 #16 0xffffffff80b3db58 in ufs_direnter (dvp=0xfffffe0101f6c750, tvp=0xfffffe00d3e28750, dirp=0xffffff8247c8d6e0, cnp=, newdirbp=0x0, isrename=0x0) at ../../../ufs/ufs/ufs_lookup.c:912 #17 0xffffffff80b43a69 in ufs_makeinode (mode=, dvp=0xfffffe0101f6c750, vpp=0xffffff8247c8da00, cnp=0xffffff8247c8da28) at ../../../ufs/ufs/ufs_vnops.c:2724 #18 0xffffffff80d1ddf7 in VOP_CREATE_APV (vop=0xffffffff812eeae0, a=0xffffff8247c8d930) at vnode_if.c:257 #19 0xffffffff80991efe in vn_open_cred (ndp=0xffffff8247c8d9c0, flagp=0xffffff8247c8d9bc, cmode=0x1a0, vn_open_flags=, cred=0xfffffe000b160300, fp=0xfffffe000f13ea00) at vnode_if.h:109 #20 0xffffffff8098c91e in kern_openat (td=0xfffffe0101923480, fd=0xffffff9c, path=0x7fffffffd5e0
, pathseg=UIO_USERSPACE, flags=0x602, mode=0x1b0) at ../../../kern/vfs_syscalls.c:1086 #21 0xffffffff80c7f2f3 in amd64_syscall (td=0xfffffe0101923480, traced=0x0) at subr_syscall.c:134 #22 0xffffffff80c69467 in Xfast_syscall () at ../../../amd64/amd64/exception.S:387 #23 0x000000080092cb9a in ?? () Previous frame inner to this frame (corrupt stack?) (kgdb) f 13 #13 0xffffffff8096b29c in getblk (vp=0xfffffe0101f6c750, blkno=0xfffffffffffffff4, size=0x8000, slpflag=0x0, slptimeo=0x0, flags=0x0) at ../../../kern/vfs_bio.c:2904 2904 KASSERT((scratch_bp->b_flags & B_KVAALLOC) != 0, (kgdb) l 2899 panic("GB_NOWAIT_BD and B_UNMAPPED %p", bp); 2900 } 2901 atomic_add_int(&mappingrestarts, 1); 2902 goto mapping_loop; 2903 } 2904 KASSERT((scratch_bp->b_flags & B_KVAALLOC) != 0, 2905 ("scratch bp !B_KVAALLOC %p", scratch_bp)); 2906 setbufkva(bp, (vm_offset_t)scratch_bp->b_kvaalloc, 2907 scratch_bp->b_kvasize, gbflags); 2908 (kgdb) info loc bp = (struct buf *) 0xffffff81e6e6bca0 bo = (struct bufobj *) 0xfffffe0101f6c8b8 bsize = error = maxsize = vmio = offset = (kgdb) p scratch_bp No symbol "scratch_bp" in current context. (kgdb) p *(struct buf *)0xffffff81e7d323a0 $1 = {b_bufobj = 0x0, b_bcount = 0x0, b_caller1 = 0x0, b_data = 0xffffff820c434000
, b_error = 0x0, b_iocmd = 0x2, b_ioflags = 0x0, b_iooffset = 0x61220000, b_resid = 0x0, b_iodone = 0, b_blkno = 0x0, b_offset = 0xffffffffffffffff, b_bobufs = {tqe_next = 0xffffff81e7a521a0, tqe_prev = 0xffffff81e842fef0}, b_left = 0xffffff81e80d5ea0, b_right = 0xffffff81e91987a0, b_vflags = 0x0, b_freelist = { tqe_next = 0xffffff81e7a68ca0, tqe_prev = 0xffffffff8154f1d0}, b_qindex = 0x0, b_flags = 0x0, b_xflags = 0x0, b_lock = {lock_object = {lo_name = 0xffffffff80f01077 "bufwait", lo_flags = 0x5730000, lo_data = 0x0, lo_witness = 0xffffff80006cd280}, lk_lock = 0xfffffe0101923480, lk_exslpfail = 0x0, lk_timo = 0x0, lk_pri = 0x60, lk_stack = {depth = 0xc, pcs = {0xffffffff808bf90f, 0xffffffff80968eef, 0xffffffff8096ae2e, 0xffffffff8096b6d0, 0xffffffff80b0b646, 0xffffffff80b3db58, 0xffffffff80b43a69, 0xffffffff80d1ddf7, 0xffffffff80991efe, 0xffffffff8098c91e, 0xffffffff80c7f2f3, 0xffffffff80c69467, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}}}, b_bufsize = 0x0, b_runningbufspace = 0x0, b_kvabase = 0xffffff820c434000
, b_kvaalloc = 0x0, b_kvasize = 0x8000, b_lblkno = 0x0, b_vp = 0x0, b_dirtyoff = 0x0, b_dirtyend = 0x0, b_rcred = 0x0, b_wcred = 0x0, b_saveaddr = 0xffffff820c434000, b_pager = {pg_reqpage = 0x0}, b_cluster = {cluster_head = {tqh_first = 0xffffff81e75088a0, tqh_last = 0xffffff81e8e14660}, cluster_entry = {tqe_next = 0xffffff81e75088a0, tqe_prev = 0xffffff81e8e14660}}, b_pages = {0x0 }, b_npages = 0x0, b_dep = {lh_first = 0x0}, b_fsprivate1 = 0x0, b_fsprivate2 = 0x0, b_fsprivate3 = 0x0, b_pin_count = 0x0} (kgdb) p *vp $2 = {v_tag = 0xffffffff80edfc17 "ufs", v_op = 0xffffffff812ee000, v_data = 0xfffffe0149ebcb28, v_mount = 0xfffffe000bd1e790, v_nmntvnodes = {tqe_next = 0xfffffe016052b9c0, tqe_prev = 0xfffffe007063ec50}, v_un = {vu_mount = 0x0, vu_socket = 0x0, vu_cdev = 0x0, vu_fifoinfo = 0x0}, v_hashlist = {le_next = 0x0, le_prev = 0xffffff80023cbae8}, v_cache_src = {lh_first = 0x0}, v_cache_dst = {tqh_first = 0xfffffe00d2774770, tqh_last = 0xfffffe00d2774790}, v_cache_dd = 0xfffffe00d2774770, v_lock = {lock_object = {lo_name = 0xffffffff80edfc17 "ufs", lo_flags = 0x57b0000, lo_data = 0x0, lo_witness = 0xffffff80006d2300}, lk_lock = 0xfffffe0101923480, lk_exslpfail = 0x0, lk_timo = 0x33, lk_pri = 0x60, lk_stack = {depth = 0xa, pcs = {0xffffffff808bff64, 0xffffffff80b34a0b, 0xffffffff80d1cc58, 0xffffffff8099032e, 0xffffffff80975407, 0xffffffff809763b0, 0xffffffff80991cfb, 0xffffffff8098c91e, 0xffffffff80c7f2f3, 0xffffffff80c69467, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}}}, v_interlock = {lock_object = {lo_name = 0xffffffff80efb102 "vnode interlock", lo_flags = 0x1030000, lo_data = 0x0, lo_witness = 0xffffff80006c9600}, mtx_lock = 0x4}, v_vnlock = 0xfffffe0101f6c7b8, v_actfreelist = {tqe_next = 0xfffffe007063ec30, tqe_prev = 0xfffffe016052bb18}, v_bufobj = {bo_mtx = {lock_object = {lo_name = 0xffffffff80f047f5 "bufobj interlock", lo_flags = 0x1030000, lo_data = 0x0, lo_witness = 0xffffff80006d0200}, mtx_lock = 0x4}, bo_ops = 0xffffffff812a7de0, bo_object = 0xfffffe01e1fff5a0, bo_synclist = {le_next = 0xfffffe007063ed98, le_prev = 0xfffffe00052402a0}, bo_private = 0xfffffe0101f6c750, __bo_vnode = 0xfffffe0101f6c750, bo_clean = {bv_hd = {tqh_first = 0x0, tqh_last = 0xfffffe0101f6c908}, bv_root = 0x0, bv_cnt = 0x0}, bo_dirty = {bv_hd = {tqh_first = 0xffffff81e6e6bca0, tqh_last = 0xffffff81e74440f0}, bv_root = 0xffffff81e6e6bca0, bv_cnt = 0xd}, bo_numoutput = 0x0, bo_flag = 0x1, bo_bsize = 0x8000}, v_pollinfo = 0x0, v_label = 0x0, v_lockf = 0x0, v_rl = {rl_waiters = {tqh_first = 0x0, tqh_last = 0xfffffe0101f6c970}, rl_currdep = 0x0}, v_cstart = 0x0, v_lasta = 0x0, v_lastw = 0x0, v_clen = 0x0, v_holdcnt = 0x10, v_usecount = 0x2, v_iflag = 0x200, v_vflag = 0x0, v_writecount = 0x0, v_hash = 0x187b9, v_type = VDIR} (kgdb) git diff master diff --git a/sys/amd64/amd64/pmap.c b/sys/amd64/amd64/pmap.c index c6c62ae..ef4ad07 100644 --- a/sys/amd64/amd64/pmap.c +++ b/sys/amd64/amd64/pmap.c @@ -4274,6 +4274,30 @@ pmap_copy_page(vm_page_t msrc, vm_page_t mdst) pagecopy((void *)src, (void *)dst); } +void +pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset, vm_page_t mb[], + vm_offset_t b_offset, int xfersize) +{ + void *a_cp, *b_cp; + vm_offset_t a_pg_offset, b_pg_offset; + int cnt; + + while (xfersize > 0) { + a_pg_offset = a_offset & PAGE_MASK; + cnt = min(xfersize, PAGE_SIZE - a_pg_offset); + a_cp = (char *)PHYS_TO_DMAP(ma[a_offset >> PAGE_SHIFT]-> + phys_addr) + a_pg_offset; + b_pg_offset = b_offset & PAGE_MASK; + cnt = min(cnt, PAGE_SIZE - b_pg_offset); + b_cp = (char *)PHYS_TO_DMAP(mb[b_offset >> 
PAGE_SHIFT]-> + phys_addr) + b_pg_offset; + bcopy(a_cp, b_cp, cnt); + a_offset += cnt; + b_offset += cnt; + xfersize -= cnt; + } +} + /* * Returns true if the pmap's pv is one of the first * 16 pvs linked to from this page. This count may diff --git a/sys/arm/arm/pmap-v6.c b/sys/arm/arm/pmap-v6.c index f05f120..1f7bd4d 100644 --- a/sys/arm/arm/pmap-v6.c +++ b/sys/arm/arm/pmap-v6.c @@ -3313,6 +3313,42 @@ pmap_copy_page_generic(vm_paddr_t src, vm_paddr_t dst) } void +pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset, vm_page_t mb[], + vm_offset_t b_offset, int xfersize) +{ + vm_page_t a_pg, b_pg; + vm_offset_t a_pg_offset, b_pg_offset; + int cnt; + + mtx_lock(&cmtx); + while (xfersize > 0) { + a_pg = ma[a_offset >> PAGE_SHIFT]; + a_pg_offset = a_offset & PAGE_MASK; + cnt = min(xfersize, PAGE_SIZE - a_pg_offset); + b_pg = mb[b_offset >> PAGE_SHIFT]; + b_pg_offset = b_offset & PAGE_MASK; + cnt = min(cnt, PAGE_SIZE - b_pg_offset); + *csrc_pte = L2_S_PROTO | VM_PAGE_TO_PHYS(a_pg) | + pte_l2_s_cache_mode; + pmap_set_prot(csrc_pte, VM_PROT_READ, 0); + PTE_SYNC(csrc_pte); + *cdst_pte = L2_S_PROTO | VM_PAGE_TO_PHYS(b_pg) | + pte_l2_s_cache_mode; + pmap_set_prot(cdst_pte, VM_PROT_READ | VM_PROT_WRITE, 0); + PTE_SYNC(cdst_pte); + cpu_tlb_flushD_SE(csrcp); + cpu_tlb_flushD_SE(cdstp); + cpu_cpwait(); + bcopy((char *)csrcp + a_pg_offset, (char *)cdstp + b_pg_offset, + cnt); + cpu_idcache_wbinv_range(cdstp + b_pg_offset, cnt); + pmap_l2cache_wbinv_range(cdstp + b_pg_offset, + VM_PAGE_TO_PHYS(b_pg) + b_pg_offset, cnt); + } + mtx_unlock(&cmtx); +} + +void pmap_copy_page(vm_page_t src, vm_page_t dst) { diff --git a/sys/arm/arm/pmap.c b/sys/arm/arm/pmap.c index c9cb42c..89d38fb 100644 --- a/sys/arm/arm/pmap.c +++ b/sys/arm/arm/pmap.c @@ -258,6 +258,9 @@ pt_entry_t pte_l1_c_proto; pt_entry_t pte_l2_s_proto; void (*pmap_copy_page_func)(vm_paddr_t, vm_paddr_t); +void (*pmap_copy_page_offs_func)(vm_paddr_t a_phys, + vm_offset_t a_offs, vm_paddr_t b_phys, vm_offset_t b_offs, + int cnt); void (*pmap_zero_page_func)(vm_paddr_t, int, int); struct msgbuf *msgbufp = 0; @@ -401,6 +404,13 @@ static struct vm_object pvzone_obj; static int pv_entry_count=0, pv_entry_max=0, pv_entry_high_water=0; static struct rwlock pvh_global_lock; +void pmap_copy_page_offs_generic(vm_paddr_t a_phys, vm_offset_t a_offs, + vm_paddr_t b_phys, vm_offset_t b_offs, int cnt); +#if ARM_MMU_XSCALE == 1 +void pmap_copy_page_offs_xscale(vm_paddr_t a_phys, vm_offset_t a_offs, + vm_paddr_t b_phys, vm_offset_t b_offs, int cnt); +#endif + /* * This list exists for the benefit of pmap_map_chunk(). 
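Every per-architecture pmap_copy_pages() added in the diff (amd64 and arm above; i386, xen and ia64 later on) shares the same chunking arithmetic, and only the way the pages are temporarily mapped differs. A minimal user-space sketch of that arithmetic, with plain byte arrays standing in for vm_page_t and a 4 KB page size assumed; illustration only, not part of the patch:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PG_SHIFT	12
#define PG_SIZE		((size_t)1 << PG_SHIFT)	/* assumed 4 KB page */
#define PG_MASK		(PG_SIZE - 1)

/* Illustrative sketch of the pmap_copy_pages() chunk loop. */
static void
copy_pages_sketch(uint8_t *ma[], size_t a_offset, uint8_t *mb[],
    size_t b_offset, size_t xfersize)
{
	size_t a_pg_offset, b_pg_offset, cnt;

	while (xfersize > 0) {
		/* Clip the chunk at the end of the current source page... */
		a_pg_offset = a_offset & PG_MASK;
		cnt = xfersize < PG_SIZE - a_pg_offset ?
		    xfersize : PG_SIZE - a_pg_offset;
		/* ...and at the end of the current destination page. */
		b_pg_offset = b_offset & PG_MASK;
		if (cnt > PG_SIZE - b_pg_offset)
			cnt = PG_SIZE - b_pg_offset;
		memcpy(mb[b_offset >> PG_SHIFT] + b_pg_offset,
		    ma[a_offset >> PG_SHIFT] + a_pg_offset, cnt);
		a_offset += cnt;
		b_offset += cnt;
		xfersize -= cnt;
	}
}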
It keeps track * of the kernel L2 tables during bootstrap, so that pmap_map_chunk() can @@ -485,6 +495,7 @@ pmap_pte_init_generic(void) pte_l2_s_proto = L2_S_PROTO_generic; pmap_copy_page_func = pmap_copy_page_generic; + pmap_copy_page_offs_func = pmap_copy_page_offs_generic; pmap_zero_page_func = pmap_zero_page_generic; } @@ -661,6 +672,7 @@ pmap_pte_init_xscale(void) #ifdef CPU_XSCALE_CORE3 pmap_copy_page_func = pmap_copy_page_generic; + pmap_copy_page_offs_func = pmap_copy_page_offs_generic; pmap_zero_page_func = pmap_zero_page_generic; xscale_use_minidata = 0; /* Make sure it is L2-cachable */ @@ -673,6 +685,7 @@ pmap_pte_init_xscale(void) #else pmap_copy_page_func = pmap_copy_page_xscale; + pmap_copy_page_offs_func = pmap_copy_page_offs_xscale; pmap_zero_page_func = pmap_zero_page_xscale; #endif @@ -4300,6 +4313,29 @@ pmap_copy_page_generic(vm_paddr_t src, vm_paddr_t dst) cpu_l2cache_inv_range(csrcp, PAGE_SIZE); cpu_l2cache_wbinv_range(cdstp, PAGE_SIZE); } + +void +pmap_copy_page_offs_generic(vm_paddr_t a_phys, vm_offset_t a_offs, + vm_paddr_t b_phys, vm_offset_t b_offs, int cnt) +{ + + mtx_lock(&cmtx); + *csrc_pte = L2_S_PROTO | a_phys | + L2_S_PROT(PTE_KERNEL, VM_PROT_READ) | pte_l2_s_cache_mode; + PTE_SYNC(csrc_pte); + *cdst_pte = L2_S_PROTO | b_phys | + L2_S_PROT(PTE_KERNEL, VM_PROT_WRITE) | pte_l2_s_cache_mode; + PTE_SYNC(cdst_pte); + cpu_tlb_flushD_SE(csrcp); + cpu_tlb_flushD_SE(cdstp); + cpu_cpwait(); + bcopy((char *)csrcp + a_offs, (char *)cdstp + b_offs, cnt); + mtx_unlock(&cmtx); + cpu_dcache_inv_range(csrcp + a_offs, cnt); + cpu_dcache_wbinv_range(cdstp + b_offs, cnt); + cpu_l2cache_inv_range(csrcp + a_offs, cnt); + cpu_l2cache_wbinv_range(cdstp + b_offs, cnt); +} #endif /* (ARM_MMU_GENERIC + ARM_MMU_SA1) != 0 */ #if ARM_MMU_XSCALE == 1 @@ -4344,6 +4380,28 @@ pmap_copy_page_xscale(vm_paddr_t src, vm_paddr_t dst) mtx_unlock(&cmtx); xscale_cache_clean_minidata(); } + +void +pmap_copy_page_offs_xscale(vm_paddr_t a_phys, vm_offset_t a_offs, + vm_paddr_t b_phys, vm_offset_t b_offs, int cnt) +{ + + mtx_lock(&cmtx); + *csrc_pte = L2_S_PROTO | a_phys | + L2_S_PROT(PTE_KERNEL, VM_PROT_READ) | + L2_C | L2_XSCALE_T_TEX(TEX_XSCALE_X); + PTE_SYNC(csrc_pte); + *cdst_pte = L2_S_PROTO | b_phys | + L2_S_PROT(PTE_KERNEL, VM_PROT_WRITE) | + L2_C | L2_XSCALE_T_TEX(TEX_XSCALE_X); + PTE_SYNC(cdst_pte); + cpu_tlb_flushD_SE(csrcp); + cpu_tlb_flushD_SE(cdstp); + cpu_cpwait(); + bcopy((char *)csrcp + a_offs, (char *)cdstp + b_offs, cnt); + mtx_unlock(&cmtx); + xscale_cache_clean_minidata(); +} #endif /* ARM_MMU_XSCALE == 1 */ void @@ -4370,8 +4428,38 @@ pmap_copy_page(vm_page_t src, vm_page_t dst) #endif } +void +pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset, vm_page_t mb[], + vm_offset_t b_offset, int xfersize) +{ + vm_page_t a_pg, b_pg; + vm_offset_t a_pg_offset, b_pg_offset; + int cnt; +#ifdef ARM_USE_SMALL_ALLOC + vm_offset_t a_va, b_va; +#endif - + cpu_dcache_wbinv_all(); + cpu_l2cache_wbinv_all(); + while (xfersize > 0) { + a_pg = ma[a_offset >> PAGE_SHIFT]; + a_pg_offset = a_offset & PAGE_MASK; + cnt = min(xfersize, PAGE_SIZE - a_pg_offset); + b_pg = mb[b_offset >> PAGE_SHIFT]; + b_pg_offset = b_offset & PAGE_MASK; + cnt = min(cnt, PAGE_SIZE - b_pg_offset); +#ifdef ARM_USE_SMALL_ALLOC + a_va = arm_ptovirt(VM_PAGE_TO_PHYS(a_pg)) + a_pg_offset; + b_va = arm_ptovirt(VM_PAGE_TO_PHYS(b_pg)) + b_pg_offset; + bcopy((char *)a_va, (char *)b_va, cnt); + cpu_dcache_wbinv_range(b_va, cnt); + cpu_l2cache_wbinv_range(b_va, cnt); +#else + pmap_copy_page_offs_func(VM_PAGE_TO_PHYS(a_pg), a_pg_offset, + 
VM_PAGE_TO_PHYS(b_pg), b_pg_offset, cnt); +#endif + } +} /* * this routine returns true if a physical page resides diff --git a/sys/arm/include/pmap.h b/sys/arm/include/pmap.h index 4f7566e..9d6c340 100644 --- a/sys/arm/include/pmap.h +++ b/sys/arm/include/pmap.h @@ -533,6 +533,8 @@ extern pt_entry_t pte_l1_c_proto; extern pt_entry_t pte_l2_s_proto; extern void (*pmap_copy_page_func)(vm_paddr_t, vm_paddr_t); +extern void (*pmap_copy_page_offs_func)(vm_paddr_t a_phys, + vm_offset_t a_offs, vm_paddr_t b_phys, vm_offset_t b_offs, int cnt); extern void (*pmap_zero_page_func)(vm_paddr_t, int, int); #if (ARM_MMU_GENERIC + ARM_MMU_V6 + ARM_MMU_V7 + ARM_MMU_SA1) != 0 || defined(CPU_XSCALE_81342) diff --git a/sys/cam/ata/ata_da.c b/sys/cam/ata/ata_da.c index 4252197..c700e7c 100644 --- a/sys/cam/ata/ata_da.c +++ b/sys/cam/ata/ata_da.c @@ -1167,6 +1167,8 @@ adaregister(struct cam_periph *periph, void *arg) ((softc->flags & ADA_FLAG_CAN_CFA) && !(softc->flags & ADA_FLAG_CAN_48BIT))) softc->disk->d_flags |= DISKFLAG_CANDELETE; + if ((cpi.hba_misc & PIM_UNMAPPED) != 0) + softc->disk->d_flags |= DISKFLAG_UNMAPPED_BIO; strlcpy(softc->disk->d_descr, cgd->ident_data.model, MIN(sizeof(softc->disk->d_descr), sizeof(cgd->ident_data.model))); strlcpy(softc->disk->d_ident, cgd->ident_data.serial, @@ -1431,13 +1433,19 @@ adastart(struct cam_periph *periph, union ccb *start_ccb) return; } #endif + KASSERT((bp->bio_flags & BIO_UNMAPPED) == 0 || + round_page(bp->bio_bcount + bp->bio_ma_offset) / + PAGE_SIZE == bp->bio_ma_n, + ("Short bio %p", bp)); cam_fill_ataio(ataio, ada_retry_count, adadone, - bp->bio_cmd == BIO_READ ? - CAM_DIR_IN : CAM_DIR_OUT, + (bp->bio_cmd == BIO_READ ? CAM_DIR_IN : + CAM_DIR_OUT) | ((bp->bio_flags & BIO_UNMAPPED) + != 0 ? CAM_DATA_BIO : 0), tag_code, - bp->bio_data, + ((bp->bio_flags & BIO_UNMAPPED) != 0) ? (void *)bp : + bp->bio_data, bp->bio_bcount, ada_default_timeout*1000); diff --git a/sys/cam/cam_ccb.h b/sys/cam/cam_ccb.h index a80880a..bcbf414 100644 --- a/sys/cam/cam_ccb.h +++ b/sys/cam/cam_ccb.h @@ -42,7 +42,6 @@ #include #include - /* General allocation length definitions for CCB structures */ #define IOCDBLEN CAM_MAX_CDBLEN /* Space for CDB bytes/pointer */ #define VUHBALEN 14 /* Vendor Unique HBA length */ @@ -572,7 +571,8 @@ typedef enum { PIM_NOINITIATOR = 0x20, /* Initiator role not supported. 
*/ PIM_NOBUSRESET = 0x10, /* User has disabled initial BUS RESET */ PIM_NO_6_BYTE = 0x08, /* Do not send 6-byte commands */ - PIM_SEQSCAN = 0x04 /* Do bus scans sequentially, not in parallel */ + PIM_SEQSCAN = 0x04, /* Do bus scans sequentially, not in parallel */ + PIM_UNMAPPED = 0x02, } pi_miscflag; /* Path Inquiry CCB */ diff --git a/sys/cam/cam_periph.c b/sys/cam/cam_periph.c index 523e549..fa4fa04 100644 --- a/sys/cam/cam_periph.c +++ b/sys/cam/cam_periph.c @@ -734,6 +734,8 @@ cam_periph_mapmem(union ccb *ccb, struct cam_periph_map_info *mapinfo) case XPT_CONT_TARGET_IO: if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_NONE) return(0); + KASSERT((ccb->ccb_h.flags & CAM_DATA_MASK) == CAM_DATA_VADDR, + ("not VADDR for SCSI_IO %p %x\n", ccb, ccb->ccb_h.flags)); data_ptrs[0] = &ccb->csio.data_ptr; lengths[0] = ccb->csio.dxfer_len; @@ -743,6 +745,8 @@ cam_periph_mapmem(union ccb *ccb, struct cam_periph_map_info *mapinfo) case XPT_ATA_IO: if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_NONE) return(0); + KASSERT((ccb->ccb_h.flags & CAM_DATA_MASK) == CAM_DATA_VADDR, + ("not VADDR for ATA_IO %p %x\n", ccb, ccb->ccb_h.flags)); data_ptrs[0] = &ccb->ataio.data_ptr; lengths[0] = ccb->ataio.dxfer_len; @@ -846,7 +850,7 @@ cam_periph_mapmem(union ccb *ccb, struct cam_periph_map_info *mapinfo) * into a larger area of VM, or if userland races against * vmapbuf() after the useracc() check. */ - if (vmapbuf(mapinfo->bp[i]) < 0) { + if (vmapbuf(mapinfo->bp[i], 1) < 0) { for (j = 0; j < i; ++j) { *data_ptrs[j] = mapinfo->bp[j]->b_saveaddr; vunmapbuf(mapinfo->bp[j]); diff --git a/sys/cam/scsi/scsi_all.c b/sys/cam/scsi/scsi_all.c index 9dac9c0..14fb1c8 100644 --- a/sys/cam/scsi/scsi_all.c +++ b/sys/cam/scsi/scsi_all.c @@ -5771,7 +5771,9 @@ scsi_read_write(struct ccb_scsiio *csio, u_int32_t retries, cam_fill_csio(csio, retries, cbfcnp, - /*flags*/readop ? CAM_DIR_IN : CAM_DIR_OUT, + ((readop & SCSI_RW_DIRMASK) == SCSI_RW_READ ? + CAM_DIR_IN : CAM_DIR_OUT) | + ((readop & SCSI_RW_BIO) != 0 ? CAM_DATA_BIO : 0), tag_action, data_ptr, dxfer_len, diff --git a/sys/cam/scsi/scsi_all.h b/sys/cam/scsi/scsi_all.h index 0693e1c..330330d 100644 --- a/sys/cam/scsi/scsi_all.h +++ b/sys/cam/scsi/scsi_all.h @@ -2354,6 +2354,10 @@ void scsi_write_buffer(struct ccb_scsiio *csio, u_int32_t retries, uint8_t *data_ptr, uint32_t param_list_length, uint8_t sense_len, uint32_t timeout); +#define SCSI_RW_READ 0x0001 +#define SCSI_RW_WRITE 0x0002 +#define SCSI_RW_DIRMASK 0x0003 +#define SCSI_RW_BIO 0x1000 void scsi_read_write(struct ccb_scsiio *csio, u_int32_t retries, void (*cbfcnp)(struct cam_periph *, union ccb *), u_int8_t tag_action, int readop, u_int8_t byte2, diff --git a/sys/cam/scsi/scsi_cd.c b/sys/cam/scsi/scsi_cd.c index a7c4c5b..a6d340f 100644 --- a/sys/cam/scsi/scsi_cd.c +++ b/sys/cam/scsi/scsi_cd.c @@ -1575,7 +1575,8 @@ cdstart(struct cam_periph *periph, union ccb *start_ccb) /*retries*/ cd_retry_count, /* cbfcnp */ cddone, MSG_SIMPLE_Q_TAG, - /* read */bp->bio_cmd == BIO_READ, + /* read */bp->bio_cmd == BIO_READ ? 
+ SCSI_RW_READ : SCSI_RW_WRITE, /* byte2 */ 0, /* minimum_cmd_size */ 10, /* lba */ bp->bio_offset / diff --git a/sys/cam/scsi/scsi_da.c b/sys/cam/scsi/scsi_da.c index 7854215..c886e9e 100644 --- a/sys/cam/scsi/scsi_da.c +++ b/sys/cam/scsi/scsi_da.c @@ -1180,7 +1180,7 @@ dadump(void *arg, void *virtual, vm_offset_t physical, off_t offset, size_t leng /*retries*/0, dadone, MSG_ORDERED_Q_TAG, - /*read*/FALSE, + /*read*/SCSI_RW_WRITE, /*byte2*/0, /*minimum_cmd_size*/ softc->minimum_cmd_size, offset / secsize, @@ -1753,6 +1753,8 @@ daregister(struct cam_periph *periph, void *arg) softc->disk->d_flags = 0; if ((softc->quirks & DA_Q_NO_SYNC_CACHE) == 0) softc->disk->d_flags |= DISKFLAG_CANFLUSHCACHE; + if ((cpi.hba_misc & PIM_UNMAPPED) != 0) + softc->disk->d_flags |= DISKFLAG_UNMAPPED_BIO; cam_strvis(softc->disk->d_descr, cgd->inq_data.vendor, sizeof(cgd->inq_data.vendor), sizeof(softc->disk->d_descr)); strlcat(softc->disk->d_descr, " ", sizeof(softc->disk->d_descr)); @@ -1981,14 +1983,18 @@ dastart(struct cam_periph *periph, union ccb *start_ccb) /*retries*/da_retry_count, /*cbfcnp*/dadone, /*tag_action*/tag_code, - /*read_op*/bp->bio_cmd - == BIO_READ, + /*read_op*/(bp->bio_cmd == BIO_READ ? + SCSI_RW_READ : SCSI_RW_WRITE) | + ((bp->bio_flags & BIO_UNMAPPED) != 0 ? + SCSI_RW_BIO : 0), /*byte2*/0, softc->minimum_cmd_size, /*lba*/bp->bio_pblkno, /*block_count*/bp->bio_bcount / softc->params.secsize, - /*data_ptr*/ bp->bio_data, + /*data_ptr*/ (bp->bio_flags & + BIO_UNMAPPED) != 0 ? (void *)bp : + bp->bio_data, /*dxfer_len*/ bp->bio_bcount, /*sense_len*/SSD_FULL_SIZE, da_default_timeout * 1000); diff --git a/sys/dev/ahci/ahci.c b/sys/dev/ahci/ahci.c index 8e692bd..d03c8af 100644 --- a/sys/dev/ahci/ahci.c +++ b/sys/dev/ahci/ahci.c @@ -2903,7 +2903,7 @@ ahciaction(struct cam_sim *sim, union ccb *ccb) if (ch->caps & AHCI_CAP_SPM) cpi->hba_inquiry |= PI_SATAPM; cpi->target_sprt = 0; - cpi->hba_misc = PIM_SEQSCAN; + cpi->hba_misc = PIM_SEQSCAN | PIM_UNMAPPED; cpi->hba_eng_cnt = 0; if (ch->caps & AHCI_CAP_SPM) cpi->max_target = 15; diff --git a/sys/dev/md/md.c b/sys/dev/md/md.c index b72f294..96e6d93 100644 --- a/sys/dev/md/md.c +++ b/sys/dev/md/md.c @@ -110,6 +110,19 @@ static int md_malloc_wait; SYSCTL_INT(_vm, OID_AUTO, md_malloc_wait, CTLFLAG_RW, &md_malloc_wait, 0, "Allow malloc to wait for memory allocations"); +static int md_unmapped_swap; +SYSCTL_INT(_debug, OID_AUTO, md_unmapped_swap, CTLFLAG_RD, + &md_unmapped_swap, 0, + ""); +static int md_unmapped_vnode; +SYSCTL_INT(_debug, OID_AUTO, md_unmapped_vnode, CTLFLAG_RD, + &md_unmapped_vnode, 0, + ""); +static int md_unmapped_malloc; +SYSCTL_INT(_debug, OID_AUTO, md_unmapped_malloc, CTLFLAG_RD, + &md_unmapped_malloc, 0, + ""); + #if defined(MD_ROOT) && !defined(MD_ROOT_FSTYPE) #define MD_ROOT_FSTYPE "ufs" #endif @@ -414,13 +427,103 @@ g_md_start(struct bio *bp) wakeup(sc); } +#define MD_MALLOC_MOVE_ZERO 1 +#define MD_MALLOC_MOVE_FILL 2 +#define MD_MALLOC_MOVE_READ 3 +#define MD_MALLOC_MOVE_WRITE 4 +#define MD_MALLOC_MOVE_CMP 5 + +static int +md_malloc_move(vm_page_t **mp, vm_offset_t *ma_offs, unsigned sectorsize, + void *ptr, u_char fill, int op) +{ + struct sf_buf *sf; + vm_page_t m, *mp1; + char *p, first; + vm_offset_t ma_offs1; + off_t *uc; + unsigned n; + int error, i, sz, first_read; + + m = NULL; + error = 0; + sf = NULL; + /* if (op == MD_MALLOC_MOVE_CMP) { gcc */ + first = 0; + first_read = 0; + uc = ptr; + mp1 = *mp; + ma_offs1 = *ma_offs; + /* } */ + sched_pin(); + for (n = sectorsize; n != 0; n -= sz) { + sz = imin(PAGE_SIZE - 
*ma_offs, n); + if (m != **mp) { + if (sf != NULL) + sf_buf_free(sf); + m = **mp; + sf = sf_buf_alloc(m, SFB_CPUPRIVATE | + (md_malloc_wait ? 0 : SFB_NOWAIT)); + if (sf == NULL) { + error = ENOMEM; + break; + } + } + p = (char *)sf_buf_kva(sf) + *ma_offs; + switch (op) { + case MD_MALLOC_MOVE_ZERO: + bzero(p, sz); + break; + case MD_MALLOC_MOVE_FILL: + memset(p, fill, sz); + break; + case MD_MALLOC_MOVE_READ: + bcopy(ptr, p, sz); + cpu_flush_dcache(p, sz); + break; + case MD_MALLOC_MOVE_WRITE: + bcopy(p, ptr, sz); + break; + case MD_MALLOC_MOVE_CMP: + for (i = 0; i < sz; i++, p++) { + if (!first_read) { + *uc = *p; + first = *p; + first_read = 1; + } else if (*p != first) { + error = EDOOFUS; + break; + } + } + break; + } + if (error != 0) + break; + *ma_offs += sz; + *ma_offs %= PAGE_SIZE; + if (*ma_offs == 0) + (*mp)++; + } + + if (sf != NULL) + sf_buf_free(sf); + sched_unpin(); + if (op == MD_MALLOC_MOVE_CMP && error != 0) { + *mp = mp1; + *ma_offs = ma_offs1; + } + return (error); +} + static int mdstart_malloc(struct md_s *sc, struct bio *bp) { - int i, error; u_char *dst; + vm_page_t *m; + int i, error, error1, notmapped; off_t secno, nsec, uc; uintptr_t sp, osp; + vm_offset_t ma_offs; switch (bp->bio_cmd) { case BIO_READ: @@ -431,9 +534,17 @@ mdstart_malloc(struct md_s *sc, struct bio *bp) return (EOPNOTSUPP); } + notmapped = (bp->bio_flags & BIO_UNMAPPED) != 0; + if (notmapped) { + m = bp->bio_ma; + ma_offs = bp->bio_ma_offset; + dst = NULL; + } else { + dst = bp->bio_data; + } + nsec = bp->bio_length / sc->sectorsize; secno = bp->bio_offset / sc->sectorsize; - dst = bp->bio_data; error = 0; while (nsec--) { osp = s_read(sc->indir, secno); @@ -441,21 +552,45 @@ mdstart_malloc(struct md_s *sc, struct bio *bp) if (osp != 0) error = s_write(sc->indir, secno, 0); } else if (bp->bio_cmd == BIO_READ) { - if (osp == 0) - bzero(dst, sc->sectorsize); - else if (osp <= 255) - memset(dst, osp, sc->sectorsize); - else { - bcopy((void *)osp, dst, sc->sectorsize); - cpu_flush_dcache(dst, sc->sectorsize); + if (osp == 0) { + if (notmapped) { + error = md_malloc_move(&m, &ma_offs, + sc->sectorsize, NULL, 0, + MD_MALLOC_MOVE_ZERO); + } else + bzero(dst, sc->sectorsize); + } else if (osp <= 255) { + if (notmapped) { + error = md_malloc_move(&m, &ma_offs, + sc->sectorsize, NULL, osp, + MD_MALLOC_MOVE_FILL); + } else + memset(dst, osp, sc->sectorsize); + } else { + if (notmapped) { + error = md_malloc_move(&m, &ma_offs, + sc->sectorsize, (void *)osp, 0, + MD_MALLOC_MOVE_READ); + } else { + bcopy((void *)osp, dst, sc->sectorsize); + cpu_flush_dcache(dst, sc->sectorsize); + } } osp = 0; } else if (bp->bio_cmd == BIO_WRITE) { if (sc->flags & MD_COMPRESS) { - uc = dst[0]; - for (i = 1; i < sc->sectorsize; i++) - if (dst[i] != uc) - break; + if (notmapped) { + error1 = md_malloc_move(&m, &ma_offs, + sc->sectorsize, &uc, 0, + MD_MALLOC_MOVE_CMP); + i = error1 == 0 ? 
sc->sectorsize : 0; + } else { + uc = dst[0]; + for (i = 1; i < sc->sectorsize; i++) { + if (dst[i] != uc) + break; + } + } } else { i = 0; uc = 0; @@ -472,10 +607,26 @@ mdstart_malloc(struct md_s *sc, struct bio *bp) error = ENOSPC; break; } - bcopy(dst, (void *)sp, sc->sectorsize); + if (notmapped) { + error = md_malloc_move(&m, + &ma_offs, sc->sectorsize, + (void *)sp, 0, + MD_MALLOC_MOVE_WRITE); + } else { + bcopy(dst, (void *)sp, + sc->sectorsize); + } error = s_write(sc->indir, secno, sp); } else { - bcopy(dst, (void *)osp, sc->sectorsize); + if (notmapped) { + error = md_malloc_move(&m, + &ma_offs, sc->sectorsize, + (void *)osp, 0, + MD_MALLOC_MOVE_WRITE); + } else { + bcopy(dst, (void *)osp, + sc->sectorsize); + } osp = 0; } } @@ -487,7 +638,8 @@ mdstart_malloc(struct md_s *sc, struct bio *bp) if (error != 0) break; secno++; - dst += sc->sectorsize; + if (!notmapped) + dst += sc->sectorsize; } bp->bio_resid = 0; return (error); @@ -628,11 +780,10 @@ mdstart_vnode(struct md_s *sc, struct bio *bp) static int mdstart_swap(struct md_s *sc, struct bio *bp) { - struct sf_buf *sf; - int rv, offs, len, lastend; - vm_pindex_t i, lastp; vm_page_t m; u_char *p; + vm_pindex_t i, lastp; + int rv, ma_offs, offs, len, lastend; switch (bp->bio_cmd) { case BIO_READ: @@ -644,6 +795,12 @@ mdstart_swap(struct md_s *sc, struct bio *bp) } p = bp->bio_data; + if ((bp->bio_flags & BIO_UNMAPPED) == 0) { + ma_offs = 0; + } else { + atomic_add_int(&md_unmapped_swap, 1); + ma_offs = bp->bio_ma_offset; + } /* * offs is the offset at which to start operating on the @@ -661,19 +818,12 @@ mdstart_swap(struct md_s *sc, struct bio *bp) vm_object_pip_add(sc->object, 1); for (i = bp->bio_offset / PAGE_SIZE; i <= lastp; i++) { len = ((i == lastp) ? lastend : PAGE_SIZE) - offs; - - m = vm_page_grab(sc->object, i, - VM_ALLOC_NORMAL|VM_ALLOC_RETRY); - VM_OBJECT_UNLOCK(sc->object); - sched_pin(); - sf = sf_buf_alloc(m, SFB_CPUPRIVATE); - VM_OBJECT_LOCK(sc->object); + m = vm_page_grab(sc->object, i, VM_ALLOC_NORMAL | + VM_ALLOC_RETRY); if (bp->bio_cmd == BIO_READ) { if (m->valid != VM_PAGE_BITS_ALL) rv = vm_pager_get_pages(sc->object, &m, 1, 0); if (rv == VM_PAGER_ERROR) { - sf_buf_free(sf); - sched_unpin(); vm_page_wakeup(m); break; } else if (rv == VM_PAGER_FAIL) { @@ -683,40 +833,44 @@ mdstart_swap(struct md_s *sc, struct bio *bp) * valid. Do not set dirty, the page * can be recreated if thrown out. 
*/ - bzero((void *)sf_buf_kva(sf), PAGE_SIZE); + pmap_zero_page(m); m->valid = VM_PAGE_BITS_ALL; } - bcopy((void *)(sf_buf_kva(sf) + offs), p, len); - cpu_flush_dcache(p, len); + if ((bp->bio_flags & BIO_UNMAPPED) != 0) { + pmap_copy_pages(&m, offs, bp->bio_ma, + ma_offs, len); + } else { + physcopyout(VM_PAGE_TO_PHYS(m) + offs, p, len); + cpu_flush_dcache(p, len); + } } else if (bp->bio_cmd == BIO_WRITE) { if (len != PAGE_SIZE && m->valid != VM_PAGE_BITS_ALL) rv = vm_pager_get_pages(sc->object, &m, 1, 0); if (rv == VM_PAGER_ERROR) { - sf_buf_free(sf); - sched_unpin(); vm_page_wakeup(m); break; } - bcopy(p, (void *)(sf_buf_kva(sf) + offs), len); + if ((bp->bio_flags & BIO_UNMAPPED) != 0) { + pmap_copy_pages(bp->bio_ma, ma_offs, &m, + offs, len); + } else { + physcopyin(p, VM_PAGE_TO_PHYS(m) + offs, len); + } m->valid = VM_PAGE_BITS_ALL; } else if (bp->bio_cmd == BIO_DELETE) { if (len != PAGE_SIZE && m->valid != VM_PAGE_BITS_ALL) rv = vm_pager_get_pages(sc->object, &m, 1, 0); if (rv == VM_PAGER_ERROR) { - sf_buf_free(sf); - sched_unpin(); vm_page_wakeup(m); break; } if (len != PAGE_SIZE) { - bzero((void *)(sf_buf_kva(sf) + offs), len); + pmap_zero_page_area(m, offs, len); vm_page_clear_dirty(m, offs, len); m->valid = VM_PAGE_BITS_ALL; } else vm_pager_page_unswapped(m); } - sf_buf_free(sf); - sched_unpin(); vm_page_wakeup(m); vm_page_lock(m); if (bp->bio_cmd == BIO_DELETE && len == PAGE_SIZE) @@ -730,6 +884,7 @@ mdstart_swap(struct md_s *sc, struct bio *bp) /* Actions on further pages start at offset 0 */ p += PAGE_SIZE - offs; offs = 0; + ma_offs += len; } vm_object_pip_subtract(sc->object, 1); VM_OBJECT_UNLOCK(sc->object); @@ -845,6 +1000,14 @@ mdinit(struct md_s *sc) pp = g_new_providerf(gp, "md%d", sc->unit); pp->mediasize = sc->mediasize; pp->sectorsize = sc->sectorsize; + switch (sc->type) { + case MD_SWAP: + case MD_MALLOC: + pp->flags |= G_PF_ACCEPT_UNMAPPED; + break; + default: + break; + } sc->gp = gp; sc->pp = pp; g_error_provider(pp, 0); diff --git a/sys/fs/cd9660/cd9660_vnops.c b/sys/fs/cd9660/cd9660_vnops.c index 21ee0fc..47d4f75 100644 --- a/sys/fs/cd9660/cd9660_vnops.c +++ b/sys/fs/cd9660/cd9660_vnops.c @@ -329,7 +329,7 @@ cd9660_read(ap) if (lblktosize(imp, rablock) < ip->i_size) error = cluster_read(vp, (off_t)ip->i_size, lbn, size, NOCRED, uio->uio_resid, - (ap->a_ioflag >> 16), &bp); + (ap->a_ioflag >> 16), 0, &bp); else error = bread(vp, lbn, size, NOCRED, &bp); } else { diff --git a/sys/fs/ext2fs/ext2_balloc.c b/sys/fs/ext2fs/ext2_balloc.c index 1c0cc0e..88ad710 100644 --- a/sys/fs/ext2fs/ext2_balloc.c +++ b/sys/fs/ext2fs/ext2_balloc.c @@ -276,7 +276,7 @@ ext2_balloc(struct inode *ip, int32_t lbn, int size, struct ucred *cred, if (seqcount && (vp->v_mount->mnt_flag & MNT_NOCLUSTERR) == 0) { error = cluster_read(vp, ip->i_size, lbn, (int)fs->e2fs_bsize, NOCRED, - MAXBSIZE, seqcount, &nbp); + MAXBSIZE, seqcount, 0, &nbp); } else { error = bread(vp, lbn, (int)fs->e2fs_bsize, NOCRED, &nbp); } diff --git a/sys/fs/ext2fs/ext2_vnops.c b/sys/fs/ext2fs/ext2_vnops.c index 1c0b7a1..77eb74b 100644 --- a/sys/fs/ext2fs/ext2_vnops.c +++ b/sys/fs/ext2fs/ext2_vnops.c @@ -1618,10 +1618,11 @@ ext2_read(struct vop_read_args *ap) if (lblktosize(fs, nextlbn) >= ip->i_size) error = bread(vp, lbn, size, NOCRED, &bp); - else if ((vp->v_mount->mnt_flag & MNT_NOCLUSTERR) == 0) + else if ((vp->v_mount->mnt_flag & MNT_NOCLUSTERR) == 0) { error = cluster_read(vp, ip->i_size, lbn, size, - NOCRED, blkoffset + uio->uio_resid, seqcount, &bp); - else if (seqcount > 1) { + NOCRED, blkoffset + 
uio->uio_resid, seqcount, + 0, &bp); + } else if (seqcount > 1) { int nextsize = blksize(fs, ip, nextlbn); error = breadn(vp, lbn, size, &nextlbn, &nextsize, 1, NOCRED, &bp); @@ -1831,7 +1832,7 @@ ext2_write(struct vop_write_args *ap) } else if (xfersize + blkoffset == fs->e2fs_fsize) { if ((vp->v_mount->mnt_flag & MNT_NOCLUSTERW) == 0) { bp->b_flags |= B_CLUSTEROK; - cluster_write(vp, bp, ip->i_size, seqcount); + cluster_write(vp, bp, ip->i_size, seqcount, 0); } else { bawrite(bp); } diff --git a/sys/fs/msdosfs/msdosfs_vnops.c b/sys/fs/msdosfs/msdosfs_vnops.c index 8e045cb..213ae81 100644 --- a/sys/fs/msdosfs/msdosfs_vnops.c +++ b/sys/fs/msdosfs/msdosfs_vnops.c @@ -600,7 +600,7 @@ msdosfs_read(ap) error = bread(vp, lbn, blsize, NOCRED, &bp); } else if ((vp->v_mount->mnt_flag & MNT_NOCLUSTERR) == 0) { error = cluster_read(vp, dep->de_FileSize, lbn, blsize, - NOCRED, on + uio->uio_resid, seqcount, &bp); + NOCRED, on + uio->uio_resid, seqcount, 0, &bp); } else if (seqcount > 1) { rasize = blsize; error = breadn(vp, lbn, @@ -820,7 +820,7 @@ msdosfs_write(ap) else if (n + croffset == pmp->pm_bpcluster) { if ((vp->v_mount->mnt_flag & MNT_NOCLUSTERW) == 0) cluster_write(vp, bp, dep->de_FileSize, - seqcount); + seqcount, 0); else bawrite(bp); } else diff --git a/sys/fs/udf/udf_vnops.c b/sys/fs/udf/udf_vnops.c index b1a3b1d..abe073e 100644 --- a/sys/fs/udf/udf_vnops.c +++ b/sys/fs/udf/udf_vnops.c @@ -478,8 +478,9 @@ udf_read(struct vop_read_args *ap) rablock = lbn + 1; if ((vp->v_mount->mnt_flag & MNT_NOCLUSTERR) == 0) { if (lblktosize(udfmp, rablock) < fsize) { - error = cluster_read(vp, fsize, lbn, size, NOCRED, - uio->uio_resid, (ap->a_ioflag >> 16), &bp); + error = cluster_read(vp, fsize, lbn, size, + NOCRED, uio->uio_resid, + (ap->a_ioflag >> 16), 0, &bp); } else { error = bread(vp, lbn, size, NOCRED, &bp); } diff --git a/sys/geom/geom.h b/sys/geom/geom.h index 351b05d..660bf6e 100644 --- a/sys/geom/geom.h +++ b/sys/geom/geom.h @@ -205,6 +205,7 @@ struct g_provider { u_int flags; #define G_PF_WITHER 0x2 #define G_PF_ORPHAN 0x4 +#define G_PF_ACCEPT_UNMAPPED 0x8 /* Two fields for the implementing class to use */ void *private; diff --git a/sys/geom/geom_disk.c b/sys/geom/geom_disk.c index 72e9162..7fec9da 100644 --- a/sys/geom/geom_disk.c +++ b/sys/geom/geom_disk.c @@ -320,13 +320,29 @@ g_disk_start(struct bio *bp) do { bp2->bio_offset += off; bp2->bio_length -= off; - bp2->bio_data += off; + if ((bp->bio_flags & BIO_UNMAPPED) == 0) { + bp2->bio_data += off; + } else { + KASSERT((dp->d_flags & DISKFLAG_UNMAPPED_BIO) + != 0, + ("unmapped bio not supported by disk %s", + dp->d_name)); + bp2->bio_ma += off / PAGE_SIZE; + bp2->bio_ma_offset += off; + bp2->bio_ma_offset %= PAGE_SIZE; + bp2->bio_ma_n -= off / PAGE_SIZE; + } if (bp2->bio_length > dp->d_maxsize) { /* * XXX: If we have a stripesize we should really * use it here. 
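When g_disk_start() above splits an unmapped bio, it must advance the page run (bio_ma, bio_ma_offset, bio_ma_n) instead of bio_data, and recount the pages after clipping the length to d_maxsize. A stand-alone sketch of that bookkeeping, assuming the advance off is a multiple of the page size (as it is when the driver's maximum transfer size is page-aligned); 4 KB page assumed, illustration only:

#include <stddef.h>

#define PG_SIZE	((size_t)4096)	/* assumed page size */

struct pagerun {
	size_t first;	/* index of the first page backing the request */
	size_t offset;	/* byte offset into that first page */
	size_t npages;	/* number of pages in the run */
	size_t length;	/* byte length of the request */
};

/* Illustrative sketch of the bio_ma/bio_ma_offset/bio_ma_n update. */
static void
pagerun_split(struct pagerun *r, size_t off, size_t maxlen)
{
	/* Skip the whole pages consumed by the already-issued part. */
	r->first += off / PG_SIZE;
	r->npages -= off / PG_SIZE;
	r->offset = (r->offset + off) % PG_SIZE;
	r->length -= off;
	/* Clip to the transfer limit and recount the pages spanned. */
	if (r->length > maxlen) {
		r->length = maxlen;
		r->npages = (r->offset + r->length + PG_SIZE - 1) / PG_SIZE;
	}
}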
*/ bp2->bio_length = dp->d_maxsize; + if ((bp->bio_flags & BIO_UNMAPPED) != 0) { + bp2->bio_ma_n = howmany( + bp2->bio_ma_offset + + bp2->bio_length, PAGE_SIZE); + } off += dp->d_maxsize; /* * To avoid a race, we need to grab the next bio @@ -488,6 +504,8 @@ g_disk_create(void *arg, int flag) pp->sectorsize = dp->d_sectorsize; pp->stripeoffset = dp->d_stripeoffset; pp->stripesize = dp->d_stripesize; + if ((dp->d_flags & DISKFLAG_UNMAPPED_BIO) != 0) + pp->flags |= G_PF_ACCEPT_UNMAPPED; if (bootverbose) printf("GEOM: new disk %s\n", gp->name); sysctl_ctx_init(&sc->sysctl_ctx); diff --git a/sys/geom/geom_disk.h b/sys/geom/geom_disk.h index 33d8eb2..246fc49 100644 --- a/sys/geom/geom_disk.h +++ b/sys/geom/geom_disk.h @@ -103,6 +103,7 @@ struct disk { #define DISKFLAG_OPEN 0x2 #define DISKFLAG_CANDELETE 0x4 #define DISKFLAG_CANFLUSHCACHE 0x8 +#define DISKFLAG_UNMAPPED_BIO 0x10 struct disk *disk_alloc(void); void disk_create(struct disk *disk, int version); diff --git a/sys/geom/geom_io.c b/sys/geom/geom_io.c index c6a5da8..4c84bcc 100644 --- a/sys/geom/geom_io.c +++ b/sys/geom/geom_io.c @@ -44,6 +44,7 @@ __FBSDID("$FreeBSD$"); #include #include #include +#include #include #include @@ -51,6 +52,13 @@ __FBSDID("$FreeBSD$"); #include #include +#include +#include +#include +#include +#include +#include +#include static struct g_bioq g_bio_run_down; static struct g_bioq g_bio_run_up; @@ -180,12 +188,17 @@ g_clone_bio(struct bio *bp) /* * BIO_ORDERED flag may be used by disk drivers to enforce * ordering restrictions, so this flag needs to be cloned. + * BIO_UNMAPPED should be inherited, to properly indicate + * which way the buffer is passed. * Other bio flags are not suitable for cloning. */ - bp2->bio_flags = bp->bio_flags & BIO_ORDERED; + bp2->bio_flags = bp->bio_flags & (BIO_ORDERED | BIO_UNMAPPED); bp2->bio_length = bp->bio_length; bp2->bio_offset = bp->bio_offset; bp2->bio_data = bp->bio_data; + bp2->bio_ma = bp->bio_ma; + bp2->bio_ma_n = bp->bio_ma_n; + bp2->bio_ma_offset = bp->bio_ma_offset; bp2->bio_attribute = bp->bio_attribute; /* Inherit classification info from the parent */ bp2->bio_classifier1 = bp->bio_classifier1; @@ -210,11 +223,15 @@ g_duplicate_bio(struct bio *bp) struct bio *bp2; bp2 = uma_zalloc(biozone, M_WAITOK | M_ZERO); + bp2->bio_flags = bp->bio_flags & BIO_UNMAPPED; bp2->bio_parent = bp; bp2->bio_cmd = bp->bio_cmd; bp2->bio_length = bp->bio_length; bp2->bio_offset = bp->bio_offset; bp2->bio_data = bp->bio_data; + bp2->bio_ma = bp->bio_ma; + bp2->bio_ma_n = bp->bio_ma_n; + bp2->bio_ma_offset = bp->bio_ma_offset; bp2->bio_attribute = bp->bio_attribute; bp->bio_children++; #ifdef KTR @@ -575,6 +592,76 @@ g_io_deliver(struct bio *bp, int error) return; } +SYSCTL_DECL(_kern_geom); + +static long transient_maps; +SYSCTL_LONG(_kern_geom, OID_AUTO, transient_maps, CTLFLAG_RD, + &transient_maps, 0, + ""); +int transient_map_retries; +SYSCTL_INT(_kern_geom, OID_AUTO, transient_map_retries, CTLFLAG_RD, + &transient_map_retries, 0, + ""); +int transient_map_failures; +SYSCTL_INT(_kern_geom, OID_AUTO, transient_map_failures, CTLFLAG_RD, + &transient_map_failures, 0, + ""); +int inflight_transient_maps; +SYSCTL_INT(_kern_geom, OID_AUTO, inflight_transient_maps, CTLFLAG_RD, + &inflight_transient_maps, 0, + ""); + +static int +g_io_transient_map_bio(struct bio *bp) +{ + vm_offset_t addr; + long size; + int retried, rv; + + size = round_page(bp->bio_ma_offset + bp->bio_length); + KASSERT(size / PAGE_SIZE == bp->bio_ma_n, ("Bio too short %p", bp)); + addr = 0; + retried = 0; + 
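The retry path that follows in g_io_transient_map_bio() handles a full transient submap with a bounded back-off: pause about 100 ms (hz / 10), retry the search up to three times, then fail the bio with EDEADLK. The same pattern as a stand-alone sketch, with a caller-supplied try_reserve() standing in for the vm_map_findspace()/vm_map_insert() pair; hypothetical helper, illustration only:

#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <unistd.h>

/*
 * Illustrative sketch of the bounded retry; try_reserve() is a
 * hypothetical stand-in for the submap allocation.
 */
static int
reserve_with_backoff(bool (*try_reserve)(size_t, void **), size_t size,
    void **addrp)
{
	int retried;

	for (retried = 0; !try_reserve(size, addrp); retried++) {
		if (retried >= 3)
			return (EDEADLK);	/* give up; caller fails the request */
		/* Give in-flight users ~100 ms to finish and free space. */
		usleep(100 * 1000);
	}
	return (0);
}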
atomic_add_long(&transient_maps, 1); +retry: + vm_map_lock(bio_transient_map); + if (vm_map_findspace(bio_transient_map, vm_map_min(bio_transient_map), + size, &addr)) { + vm_map_unlock(bio_transient_map); + if (retried >= 3) { + g_io_deliver(bp, EDEADLK/* XXXKIB */); + CTR2(KTR_GEOM, "g_down cannot map bp %p provider %s", + bp, bp->bio_to->name); + atomic_add_int(&transient_map_failures, 1); + return (1); + } else { + /* + * Naive attempt to quisce the I/O to get more + * in-flight requests completed and defragment + * the bio_transient_map. + */ + CTR3(KTR_GEOM, "g_down retrymap bp %p provider %s r %d", + bp, bp->bio_to->name, retried); + pause("g_d_tra", hz / 10); + retried++; + atomic_add_int(&transient_map_retries, 1); + goto retry; + } + } + rv = vm_map_insert(bio_transient_map, NULL, 0, addr, addr + size, + VM_PROT_RW, VM_PROT_RW, MAP_NOFAULT); + KASSERT(rv == KERN_SUCCESS, + ("vm_map_insert(bio_transient_map) rv %d %jx %lx", + rv, (uintmax_t)addr, size)); + vm_map_unlock(bio_transient_map); + atomic_add_int(&inflight_transient_maps, 1); + pmap_qenter((vm_offset_t)addr, bp->bio_ma, OFF_TO_IDX(size)); + bp->bio_data = (caddr_t)addr + bp->bio_ma_offset; + bp->bio_flags |= BIO_TRANSIENT_MAPPING; + bp->bio_flags &= ~BIO_UNMAPPED; + return (0); +} + void g_io_schedule_down(struct thread *tp __unused) { @@ -636,6 +723,12 @@ g_io_schedule_down(struct thread *tp __unused) default: break; } + if ((bp->bio_flags & BIO_UNMAPPED) != 0 && + (bp->bio_to->flags & G_PF_ACCEPT_UNMAPPED) == 0 && + (bp->bio_cmd == BIO_READ || bp->bio_cmd == BIO_WRITE)) { + if (g_io_transient_map_bio(bp)) + continue; + } THREAD_NO_SLEEPING(); CTR4(KTR_GEOM, "g_down starting bp %p provider %s off %ld " "len %ld", bp, bp->bio_to->name, bp->bio_offset, diff --git a/sys/geom/geom_vfs.c b/sys/geom/geom_vfs.c index bbed550..92f1ad2 100644 --- a/sys/geom/geom_vfs.c +++ b/sys/geom/geom_vfs.c @@ -188,14 +188,14 @@ g_vfs_strategy(struct bufobj *bo, struct buf *bp) bip = g_alloc_bio(); bip->bio_cmd = bp->b_iocmd; bip->bio_offset = bp->b_iooffset; - bip->bio_data = bp->b_data; - bip->bio_done = g_vfs_done; - bip->bio_caller2 = bp; bip->bio_length = bp->b_bcount; - if (bp->b_flags & B_BARRIER) { + bdata2bio(bp, bip); + if ((bp->b_flags & B_BARRIER) != 0) { bip->bio_flags |= BIO_ORDERED; bp->b_flags &= ~B_BARRIER; } + bip->bio_done = g_vfs_done; + bip->bio_caller2 = bp; g_io_request(bip, cp); } diff --git a/sys/geom/part/g_part.c b/sys/geom/part/g_part.c index e2ba79e..7650499 100644 --- a/sys/geom/part/g_part.c +++ b/sys/geom/part/g_part.c @@ -427,6 +427,7 @@ g_part_new_provider(struct g_geom *gp, struct g_part_table *table, entry->gpe_pp->stripeoffset = pp->stripeoffset + entry->gpe_offset; if (pp->stripesize > 0) entry->gpe_pp->stripeoffset %= pp->stripesize; + entry->gpe_pp->flags |= pp->flags & G_PF_ACCEPT_UNMAPPED; g_error_provider(entry->gpe_pp, 0); } diff --git a/sys/i386/i386/pmap.c b/sys/i386/i386/pmap.c index 5fee565..27548dc 100644 --- a/sys/i386/i386/pmap.c +++ b/sys/i386/i386/pmap.c @@ -4240,6 +4240,49 @@ pmap_copy_page(vm_page_t src, vm_page_t dst) mtx_unlock(&sysmaps->lock); } +void +pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset, vm_page_t mb[], + vm_offset_t b_offset, int xfersize) +{ + struct sysmaps *sysmaps; + vm_page_t a_pg, b_pg; + char *a_cp, *b_cp; + vm_offset_t a_pg_offset, b_pg_offset; + int cnt; + + sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)]; + mtx_lock(&sysmaps->lock); + if (*sysmaps->CMAP1 != 0) + panic("pmap_copy_pages: CMAP1 busy"); + if (*sysmaps->CMAP2 != 0) + panic("pmap_copy_pages: CMAP2 
busy"); + sched_pin(); + while (xfersize > 0) { + invlpg((u_int)sysmaps->CADDR1); + invlpg((u_int)sysmaps->CADDR2); + a_pg = ma[a_offset >> PAGE_SHIFT]; + a_pg_offset = a_offset & PAGE_MASK; + cnt = min(xfersize, PAGE_SIZE - a_pg_offset); + b_pg = mb[b_offset >> PAGE_SHIFT]; + b_pg_offset = b_offset & PAGE_MASK; + cnt = min(cnt, PAGE_SIZE - b_pg_offset); + *sysmaps->CMAP1 = PG_V | VM_PAGE_TO_PHYS(a_pg) | PG_A | + pmap_cache_bits(b_pg->md.pat_mode, 0); + *sysmaps->CMAP2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(b_pg) | PG_A | + PG_M | pmap_cache_bits(b_pg->md.pat_mode, 0); + a_cp = sysmaps->CADDR1 + a_pg_offset; + b_cp = sysmaps->CADDR2 + b_pg_offset; + bcopy(a_cp, b_cp, cnt); + a_offset += cnt; + b_offset += cnt; + xfersize -= cnt; + } + *sysmaps->CMAP1 = 0; + *sysmaps->CMAP2 = 0; + sched_unpin(); + mtx_unlock(&sysmaps->lock); +} + /* * Returns true if the pmap's pv is one of the first * 16 pvs linked to from this page. This count may diff --git a/sys/i386/xen/pmap.c b/sys/i386/xen/pmap.c index a8f11a4..28ba21b 100644 --- a/sys/i386/xen/pmap.c +++ b/sys/i386/xen/pmap.c @@ -3448,6 +3448,46 @@ pmap_copy_page(vm_page_t src, vm_page_t dst) mtx_unlock(&sysmaps->lock); } +void +pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset, vm_page_t mb[], + vm_offset_t b_offset, int xfersize) +{ + struct sysmaps *sysmaps; + vm_page_t a_pg, b_pg; + char *a_cp, *b_cp; + vm_offset_t a_pg_offset, b_pg_offset; + int cnt; + + sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)]; + mtx_lock(&sysmaps->lock); + if (*sysmaps->CMAP1 != 0) + panic("pmap_copy_pages: CMAP1 busy"); + if (*sysmaps->CMAP2 != 0) + panic("pmap_copy_pages: CMAP2 busy"); + sched_pin(); + while (xfersize > 0) { + a_pg = ma[a_offset >> PAGE_SHIFT]; + a_pg_offset = a_offset & PAGE_MASK; + cnt = min(xfersize, PAGE_SIZE - a_pg_offset); + b_pg = mb[b_offset >> PAGE_SHIFT]; + b_pg_offset = b_offset & PAGE_MASK; + cnt = min(cnt, PAGE_SIZE - b_pg_offset); + PT_SET_MA(sysmaps->CADDR1, PG_V | VM_PAGE_TO_MACH(a_pg) | PG_A); + PT_SET_MA(sysmaps->CADDR2, PG_V | PG_RW | + VM_PAGE_TO_MACH(b_pg) | PG_A | PG_M); + a_cp = sysmaps->CADDR1 + a_pg_offset; + b_cp = sysmaps->CADDR2 + b_pg_offset; + bcopy(a_cp, b_cp, cnt); + a_offset += cnt; + b_offset += cnt; + xfersize -= cnt; + } + PT_SET_MA(sysmaps->CADDR1, 0); + PT_SET_MA(sysmaps->CADDR2, 0); + sched_unpin(); + mtx_unlock(&sysmaps->lock); +} + /* * Returns true if the pmap's pv is one of the first * 16 pvs linked to from this page. This count may diff --git a/sys/ia64/ia64/pmap.c b/sys/ia64/ia64/pmap.c index 594f8c6..28610c6 100644 --- a/sys/ia64/ia64/pmap.c +++ b/sys/ia64/ia64/pmap.c @@ -2014,6 +2014,30 @@ pmap_copy_page(vm_page_t msrc, vm_page_t mdst) bcopy(src, dst, PAGE_SIZE); } +void +pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset, vm_page_t mb[], + vm_offset_t b_offset, int xfersize) +{ + void *a_cp, *b_cp; + vm_offset_t a_pg_offset, b_pg_offset; + int cnt; + + while (xfersize > 0) { + a_pg_offset = a_offset & PAGE_MASK; + cnt = min(xfersize, PAGE_SIZE - a_pg_offset); + a_cp = (char *)pmap_page_to_va(ma[a_offset >> PAGE_SHIFT]) + + a_pg_offset; + b_pg_offset = b_offset & PAGE_MASK; + cnt = min(cnt, PAGE_SIZE - b_pg_offset); + b_cp = (char *)pmap_page_to_va(mb[b_offset >> PAGE_SHIFT]) + + b_pg_offset; + bcopy(a_cp, b_cp, cnt); + a_offset += cnt; + b_offset += cnt; + xfersize -= cnt; + } +} + /* * Returns true if the pmap's pv is one of the first * 16 pvs linked to from this page. 
This count may diff --git a/sys/kern/kern_physio.c b/sys/kern/kern_physio.c index 34072f3..922ebb6 100644 --- a/sys/kern/kern_physio.c +++ b/sys/kern/kern_physio.c @@ -92,7 +92,7 @@ physio(struct cdev *dev, struct uio *uio, int ioflag) bp->b_blkno = btodb(bp->b_offset); if (uio->uio_segflg == UIO_USERSPACE) - if (vmapbuf(bp) < 0) { + if (vmapbuf(bp, 0) < 0) { error = EFAULT; goto doerror; } diff --git a/sys/kern/subr_bus_dma.c b/sys/kern/subr_bus_dma.c index 773d01a..1ca1f89 100644 --- a/sys/kern/subr_bus_dma.c +++ b/sys/kern/subr_bus_dma.c @@ -126,11 +126,27 @@ static int _bus_dmamap_load_bio(bus_dma_tag_t dmat, bus_dmamap_t map, struct bio *bio, int *nsegs, int flags) { - int error; + vm_paddr_t paddr; + bus_size_t len, tlen; + int error, i, ma_offs; - error = _bus_dmamap_load_buffer(dmat, map, bio->bio_data, - bio->bio_bcount, kernel_pmap, flags, NULL, nsegs); + if ((bio->bio_flags & BIO_UNMAPPED) == 0) { + error = _bus_dmamap_load_buffer(dmat, map, bio->bio_data, + bio->bio_bcount, kernel_pmap, flags, NULL, nsegs); + return (error); + } + tlen = bio->bio_bcount; + ma_offs = bio->bio_ma_offset; + for (i = 0; tlen > 0; i++, tlen -= len) { + len = min(PAGE_SIZE - ma_offs, tlen); + paddr = VM_PAGE_TO_PHYS(bio->bio_ma[i]) + ma_offs; + error = _bus_dmamap_load_phys(dmat, map, paddr, len, + flags, NULL, nsegs); + if (error != 0) + break; + ma_offs = 0; + } return (error); } diff --git a/sys/kern/subr_param.c b/sys/kern/subr_param.c index f36c769..1fb344e 100644 --- a/sys/kern/subr_param.c +++ b/sys/kern/subr_param.c @@ -91,6 +91,7 @@ int maxfilesperproc; /* per-proc open files limit */ int msgbufsize; /* size of kernel message buffer */ int ncallout; /* maximum # of timer events */ int nbuf; +int bio_transient_maxcnt; int ngroups_max; /* max # groups per process */ int nswbuf; pid_t pid_max = PID_MAX; @@ -119,6 +120,9 @@ SYSCTL_LONG(_kern, OID_AUTO, maxswzone, CTLFLAG_RDTUN, &maxswzone, 0, "Maximum memory for swap metadata"); SYSCTL_LONG(_kern, OID_AUTO, maxbcache, CTLFLAG_RDTUN, &maxbcache, 0, "Maximum value of vfs.maxbufspace"); +SYSCTL_INT(_kern, OID_AUTO, bio_transient_maxcnt, CTLFLAG_RDTUN, + &bio_transient_maxcnt, 0, + "Maximum number of transient BIOs mappings"); SYSCTL_ULONG(_kern, OID_AUTO, maxtsiz, CTLFLAG_RW | CTLFLAG_TUN, &maxtsiz, 0, "Maximum text size"); SYSCTL_ULONG(_kern, OID_AUTO, dfldsiz, CTLFLAG_RW | CTLFLAG_TUN, &dfldsiz, 0, @@ -321,6 +325,7 @@ init_param2(long physpages) */ nbuf = NBUF; TUNABLE_INT_FETCH("kern.nbuf", &nbuf); + TUNABLE_INT_FETCH("kern.bio_transient_maxcnt", &bio_transient_maxcnt); /* * XXX: Does the callout wheel have to be so big? diff --git a/sys/kern/vfs_aio.c b/sys/kern/vfs_aio.c index 99b0197..ae6ae8e 100644 --- a/sys/kern/vfs_aio.c +++ b/sys/kern/vfs_aio.c @@ -1322,7 +1322,7 @@ aio_qphysio(struct proc *p, struct aiocblist *aiocbe) /* * Bring buffer into kernel space. */ - if (vmapbuf(bp) < 0) { + if (vmapbuf(bp, 1) < 0) { error = EFAULT; goto doerror; } diff --git a/sys/kern/vfs_bio.c b/sys/kern/vfs_bio.c index 6393399..83d3609 100644 --- a/sys/kern/vfs_bio.c +++ b/sys/kern/vfs_bio.c @@ -91,6 +91,7 @@ struct buf_ops buf_ops_bio = { * carnal knowledge of buffers. This knowledge should be moved to vfs_bio.c. 
*/ struct buf *buf; /* buffer header pool */ +caddr_t unmapped_buf; static struct proc *bufdaemonproc; @@ -131,6 +132,10 @@ SYSCTL_PROC(_vfs, OID_AUTO, bufspace, CTLTYPE_LONG|CTLFLAG_MPSAFE|CTLFLAG_RD, SYSCTL_LONG(_vfs, OID_AUTO, bufspace, CTLFLAG_RD, &bufspace, 0, "Virtual memory used for buffers"); #endif +static long unmapped_bufspace; +SYSCTL_LONG(_vfs, OID_AUTO, unmapped_bufspace, CTLFLAG_RD, + &unmapped_bufspace, 0, + "Amount of unmapped buffers, inclusive in the bufspace"); static long maxbufspace; SYSCTL_LONG(_vfs, OID_AUTO, maxbufspace, CTLFLAG_RD, &maxbufspace, 0, "Maximum allowed value of bufspace (including buf_daemon)"); @@ -200,6 +205,10 @@ SYSCTL_INT(_vfs, OID_AUTO, getnewbufcalls, CTLFLAG_RW, &getnewbufcalls, 0, static int getnewbufrestarts; SYSCTL_INT(_vfs, OID_AUTO, getnewbufrestarts, CTLFLAG_RW, &getnewbufrestarts, 0, "Number of times getnewbuf has had to restart a buffer aquisition"); +static int mappingrestarts; +SYSCTL_INT(_vfs, OID_AUTO, mappingrestarts, CTLFLAG_RW, &mappingrestarts, 0, + "Number of times getblk has had to restart a buffer mapping for " + "unmapped buffer"); static int flushbufqtarget = 100; SYSCTL_INT(_vfs, OID_AUTO, flushbufqtarget, CTLFLAG_RW, &flushbufqtarget, 0, "Amount of work to do in flushbufqueues when helping bufdaemon"); @@ -280,6 +289,9 @@ static struct mtx nblock; /* Queues for free buffers with various properties */ static TAILQ_HEAD(bqueues, buf) bufqueues[BUFFER_QUEUES] = { { 0 } }; +#ifdef INVARIANTS +static int bq_len[BUFFER_QUEUES]; +#endif /* Lock for the bufqueues */ static struct mtx bqlock; @@ -510,7 +522,7 @@ caddr_t kern_vfs_bio_buffer_alloc(caddr_t v, long physmem_est) { int tuned_nbuf; - long maxbuf; + long maxbuf, maxbuf_sz, buf_sz, biotmap_sz; /* * physmem_est is in pages. Convert it to kilobytes (assumes @@ -554,6 +566,52 @@ kern_vfs_bio_buffer_alloc(caddr_t v, long physmem_est) } /* + * Ideal allocation size for the transient bio submap if 10% + * of the maximal space buffer map. This roughly corresponds + * to the amount of the buffer mapped for typical UFS load. + * + * Clip the buffer map to reserve space for the transient + * BIOs, if its extent is bigger than 90% of the maximum + * buffer map extent on the platform. + * + * The fall-back to the maxbuf in case of maxbcache unset, + * allows to not trim the buffer KVA for the architectures + * with ample KVA space. + */ + if (bio_transient_maxcnt == 0) { + maxbuf_sz = maxbcache != 0 ? maxbcache : maxbuf * BKVASIZE; + buf_sz = nbuf * BKVASIZE; + if (buf_sz < maxbuf_sz / 10 * 9) { + /* + * There is more KVA than memory. Do not + * adjust buffer map size, and assign the rest + * of maxbuf to transient map. + */ + biotmap_sz = maxbuf_sz - buf_sz; + } else { + /* + * Buffer map spans all KVA we could afford on + * this platform. Give 10% of the buffer map + * to the transient bio map. + */ + biotmap_sz = buf_sz / 10; + buf_sz -= biotmap_sz; + } + if (biotmap_sz / INT_MAX > MAXPHYS) + bio_transient_maxcnt = INT_MAX; + else + bio_transient_maxcnt = biotmap_sz / MAXPHYS; + /* + * Artifically limit to 1024 simultaneous in-flight I/Os + * using the transient mapping. + */ + if (bio_transient_maxcnt > 1024) + bio_transient_maxcnt = 1024; + if (tuned_nbuf) + nbuf = buf_sz / BKVASIZE; + } + + /* * swbufs are used as temporary holders for I/O, such as paging I/O. * We have no less then 16 and no more then 256. 
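The sizing comment in kern_vfs_bio_buffer_alloc() above boils down to: give the transient bio submap whatever buffer-map KVA the buffer cache does not need, and never less than roughly 10% of it, then express that space as a number of MAXPHYS-sized in-flight mappings, capped at 1024. A sketch of that computation as a pure function, with BKVASIZE and MAXPHYS passed as parameters rather than taken from the headers; the overflow guard and the nbuf re-tuning of the real code are elided, illustration only:

/* Illustrative sketch of the bio_transient_maxcnt auto-tuning. */
static int
transient_maxcnt_sketch(long maxbcache, long maxbuf, long nbuf,
    long bkvasize, long maxphys)
{
	long maxbuf_sz, buf_sz, biotmap_sz, cnt;

	/* Largest KVA extent the buffer map may use on this platform. */
	maxbuf_sz = maxbcache != 0 ? maxbcache : maxbuf * bkvasize;
	/* KVA the tuned nbuf would actually consume. */
	buf_sz = nbuf * bkvasize;
	if (buf_sz < maxbuf_sz / 10 * 9) {
		/* More KVA than memory: transient map takes the leftover. */
		biotmap_sz = maxbuf_sz - buf_sz;
	} else {
		/* Buffer map spans all affordable KVA: carve 10% out of it. */
		biotmap_sz = buf_sz / 10;
	}
	/* Each transient mapping consumes at most maxphys bytes of KVA. */
	cnt = biotmap_sz / maxphys;
	if (cnt > 1024)		/* cap simultaneous in-flight mappings */
		cnt = 1024;
	return ((int)cnt);
}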
*/ @@ -606,6 +664,9 @@ bufinit(void) LIST_INIT(&bp->b_dep); BUF_LOCKINIT(bp); TAILQ_INSERT_TAIL(&bufqueues[QUEUE_EMPTY], bp, b_freelist); +#ifdef INVARIANTS + bq_len[QUEUE_EMPTY]++; +#endif } /* @@ -674,6 +735,55 @@ bufinit(void) bogus_page = vm_page_alloc(NULL, 0, VM_ALLOC_NOOBJ | VM_ALLOC_NORMAL | VM_ALLOC_WIRED); + unmapped_buf = (caddr_t)kmem_alloc_nofault(kernel_map, MAXPHYS); +} + +#ifdef INVARIANTS +static inline void +vfs_buf_check_mapped(struct buf *bp) +{ + + KASSERT((bp->b_flags & B_UNMAPPED) == 0, + ("mapped buf %p %x", bp, bp->b_flags)); + KASSERT(bp->b_kvabase != unmapped_buf, + ("mapped buf: b_kvabase was not updated %p", bp)); + KASSERT(bp->b_data != unmapped_buf, + ("mapped buf: b_data was not updated %p", bp)); +} + +static inline void +vfs_buf_check_unmapped(struct buf *bp) +{ + + KASSERT((bp->b_flags & B_UNMAPPED) == B_UNMAPPED, + ("unmapped buf %p %x", bp, bp->b_flags)); + KASSERT(bp->b_kvabase == unmapped_buf, + ("unmapped buf: corrupted b_kvabase %p", bp)); + KASSERT(bp->b_data == unmapped_buf, + ("unmapped buf: corrupted b_data %p", bp)); +} + +#define BUF_CHECK_MAPPED(bp) vfs_buf_check_mapped(bp) +#define BUF_CHECK_UNMAPPED(bp) vfs_buf_check_unmapped(bp) +#else +#define BUF_CHECK_MAPPED(bp) do {} while (0) +#define BUF_CHECK_UNMAPPED(bp) do {} while (0) +#endif + +static void +bpmap_qenter(struct buf *bp) +{ + + BUF_CHECK_MAPPED(bp); + + /* + * bp->b_data is relative to bp->b_offset, but + * bp->b_offset may be offset into the first page. + */ + bp->b_data = (caddr_t)trunc_page((vm_offset_t)bp->b_data); + pmap_qenter((vm_offset_t)bp->b_data, bp->b_pages, bp->b_npages); + bp->b_data = (caddr_t)((vm_offset_t)bp->b_data | + (vm_offset_t)(bp->b_offset & PAGE_MASK)); } /* @@ -685,14 +795,26 @@ static void bfreekva(struct buf *bp) { - if (bp->b_kvasize) { - atomic_add_int(&buffreekvacnt, 1); - atomic_subtract_long(&bufspace, bp->b_kvasize); - vm_map_remove(buffer_map, (vm_offset_t) bp->b_kvabase, - (vm_offset_t) bp->b_kvabase + bp->b_kvasize); - bp->b_kvasize = 0; - bufspacewakeup(); + if (bp->b_kvasize == 0) + return; + + atomic_add_int(&buffreekvacnt, 1); + atomic_subtract_long(&bufspace, bp->b_kvasize); + if ((bp->b_flags & B_UNMAPPED) == 0) { + BUF_CHECK_MAPPED(bp); + vm_map_remove(buffer_map, (vm_offset_t)bp->b_kvabase, + (vm_offset_t)bp->b_kvabase + bp->b_kvasize); + } else { + BUF_CHECK_UNMAPPED(bp); + if ((bp->b_flags & B_KVAALLOC) != 0) { + vm_map_remove(buffer_map, (vm_offset_t)bp->b_kvaalloc, + (vm_offset_t)bp->b_kvaalloc + bp->b_kvasize); + } + atomic_subtract_long(&unmapped_bufspace, bp->b_kvasize); + bp->b_flags &= ~(B_UNMAPPED | B_KVAALLOC); } + bp->b_kvasize = 0; + bufspacewakeup(); } /* @@ -759,6 +881,11 @@ bremfreel(struct buf *bp) mtx_assert(&bqlock, MA_OWNED); TAILQ_REMOVE(&bufqueues[bp->b_qindex], bp, b_freelist); +#ifdef INVARIANTS + KASSERT(bq_len[bp->b_qindex] >= 1, ("queue %d underflow", + bp->b_qindex)); + bq_len[bp->b_qindex]--; +#endif bp->b_qindex = QUEUE_NONE; /* * If this was a delayed bremfree() we only need to remove the buffer @@ -829,9 +956,8 @@ breada(struct vnode * vp, daddr_t * rablkno, int * rabsize, * getblk(). Also starts asynchronous I/O on read-ahead blocks. 
*/ int -breadn_flags(struct vnode * vp, daddr_t blkno, int size, - daddr_t * rablkno, int *rabsize, int cnt, - struct ucred * cred, int flags, struct buf **bpp) +breadn_flags(struct vnode *vp, daddr_t blkno, int size, daddr_t *rablkno, + int *rabsize, int cnt, struct ucred *cred, int flags, struct buf **bpp) { struct buf *bp; int rv = 0, readwait = 0; @@ -1405,7 +1531,8 @@ brelse(struct buf *bp) } } - if ((bp->b_flags & B_INVAL) == 0) { + if ((bp->b_flags & (B_INVAL | B_UNMAPPED)) == 0) { + BUF_CHECK_MAPPED(bp); pmap_qenter( trunc_page((vm_offset_t)bp->b_data), bp->b_pages, bp->b_npages); @@ -1506,11 +1633,17 @@ brelse(struct buf *bp) bp->b_qindex = QUEUE_DIRTY; else bp->b_qindex = QUEUE_CLEAN; - if (bp->b_flags & B_AGE) - TAILQ_INSERT_HEAD(&bufqueues[bp->b_qindex], bp, b_freelist); - else - TAILQ_INSERT_TAIL(&bufqueues[bp->b_qindex], bp, b_freelist); + if (bp->b_flags & B_AGE) { + TAILQ_INSERT_HEAD(&bufqueues[bp->b_qindex], bp, + b_freelist); + } else { + TAILQ_INSERT_TAIL(&bufqueues[bp->b_qindex], bp, + b_freelist); + } } +#ifdef INVARIANTS + bq_len[bp->b_qindex]++; +#endif mtx_unlock(&bqlock); /* @@ -1601,6 +1734,9 @@ bqrelse(struct buf *bp) if (bp->b_flags & B_DELWRI) { bp->b_qindex = QUEUE_DIRTY; TAILQ_INSERT_TAIL(&bufqueues[bp->b_qindex], bp, b_freelist); +#ifdef INVARIANTS + bq_len[bp->b_qindex]++; +#endif } else { /* * The locking of the BO_LOCK for checking of the @@ -1613,6 +1749,9 @@ bqrelse(struct buf *bp) bp->b_qindex = QUEUE_CLEAN; TAILQ_INSERT_TAIL(&bufqueues[QUEUE_CLEAN], bp, b_freelist); +#ifdef INVARIANTS + bq_len[QUEUE_CLEAN]++; +#endif } else { /* * We are too low on memory, we have to try to free @@ -1654,7 +1793,11 @@ vfs_vmio_release(struct buf *bp) int i; vm_page_t m; - pmap_qremove(trunc_page((vm_offset_t)bp->b_data), bp->b_npages); + if ((bp->b_flags & B_UNMAPPED) == 0) { + BUF_CHECK_MAPPED(bp); + pmap_qremove(trunc_page((vm_offset_t)bp->b_data), bp->b_npages); + } else + BUF_CHECK_UNMAPPED(bp); VM_OBJECT_LOCK(bp->b_bufobj->bo_object); for (i = 0; i < bp->b_npages; i++) { m = bp->b_pages[i]; @@ -1758,8 +1901,10 @@ vfs_bio_awrite(struct buf *bp) int nwritten; int size; int maxcl; + int gbflags; bo = &vp->v_bufobj; + gbflags = (bp->b_flags & B_UNMAPPED) != 0 ? GB_UNMAPPED : 0; /* * right now we support clustered writing only to regular files. If * we find a clusterable block we could be in the middle of a cluster @@ -1790,8 +1935,9 @@ vfs_bio_awrite(struct buf *bp) */ if (ncl != 1) { BUF_UNLOCK(bp); - nwritten = cluster_wbuild(vp, size, lblkno - j, ncl); - return nwritten; + nwritten = cluster_wbuild(vp, size, lblkno - j, ncl, + gbflags); + return (nwritten); } } bremfree(bp); @@ -1807,46 +1953,206 @@ vfs_bio_awrite(struct buf *bp) return nwritten; } +static void +setbufkva(struct buf *bp, vm_offset_t addr, int maxsize, int gbflags) +{ + + KASSERT((bp->b_flags & (B_UNMAPPED | B_KVAALLOC)) == 0 && + bp->b_kvasize == 0, ("call bfreekva(%p)", bp)); + if ((gbflags & GB_UNMAPPED) == 0) { + bp->b_kvabase = (caddr_t)addr; + } else if ((gbflags & GB_KVAALLOC) != 0) { + KASSERT((gbflags & GB_UNMAPPED) != 0, + ("GB_KVAALLOC without GB_UNMAPPED")); + bp->b_kvaalloc = (caddr_t)addr; + bp->b_flags |= B_KVAALLOC; + atomic_add_long(&unmapped_bufspace, bp->b_kvasize); + } + bp->b_kvasize = maxsize; +} + /* - * getnewbuf: - * - * Find and initialize a new buffer header, freeing up existing buffers - * in the bufqueues as necessary. The new buffer is returned locked. - * - * Important: B_INVAL is not set. 
If the caller wishes to throw the - * buffer away, the caller must set B_INVAL prior to calling brelse(). - * - * We block if: - * We have insufficient buffer headers - * We have insufficient buffer space - * buffer_map is too fragmented ( space reservation fails ) - * If we have to flush dirty buffers ( but we try to avoid this ) - * - * To avoid VFS layer recursion we do not flush dirty buffers ourselves. - * Instead we ask the buf daemon to do it for us. We attempt to - * avoid piecemeal wakeups of the pageout daemon. + * Allocate the buffer KVA and set b_kvasize. Also set b_kvabase if + * needed. */ +static int +allocbufkva(struct buf *bp, int maxsize, int gbflags) +{ + vm_offset_t addr; + int rv; -static struct buf * -getnewbuf(struct vnode *vp, int slpflag, int slptimeo, int size, int maxsize, - int gbflags) + bfreekva(bp); + addr = 0; + + vm_map_lock(buffer_map); + if (vm_map_findspace(buffer_map, vm_map_min(buffer_map), maxsize, + &addr)) { + vm_map_unlock(buffer_map); + /* + * Buffer map is too fragmented. Request the caller + * to defragment the map. + */ + atomic_add_int(&bufdefragcnt, 1); + return (1); + } + rv = vm_map_insert(buffer_map, NULL, 0, addr, addr + maxsize, + VM_PROT_RW, VM_PROT_RW, MAP_NOFAULT); + KASSERT(rv == KERN_SUCCESS, ("vm_map_insert(buffer_map) rv %d", rv)); + vm_map_unlock(buffer_map); + setbufkva(bp, addr, maxsize, gbflags); + atomic_add_long(&bufspace, bp->b_kvasize); + return (0); +} + +/* + * Ask the bufdaemon for help, or act as bufdaemon itself, when a + * locked vnode is supplied. + */ +static void +getnewbuf_bufd_help(struct vnode *vp, int gbflags, int slpflag, int slptimeo, + int defrag) { struct thread *td; - struct buf *bp; - struct buf *nbp; - int defrag = 0; - int nqindex; - static int flushingbufs; + char *waitmsg; + int fl, flags, norunbuf; + + mtx_assert(&bqlock, MA_OWNED); + + if (defrag) { + flags = VFS_BIO_NEED_BUFSPACE; + waitmsg = "nbufkv"; + } else if (bufspace >= hibufspace) { + waitmsg = "nbufbs"; + flags = VFS_BIO_NEED_BUFSPACE; + } else { + waitmsg = "newbuf"; + flags = VFS_BIO_NEED_ANY; + } + mtx_lock(&nblock); + needsbuffer |= flags; + mtx_unlock(&nblock); + mtx_unlock(&bqlock); + + bd_speedup(); /* heeeelp */ + if ((gbflags & GB_NOWAIT_BD) != 0) + return; td = curthread; + mtx_lock(&nblock); + while (needsbuffer & flags) { + if (vp != NULL && (td->td_pflags & TDP_BUFNEED) == 0) { + mtx_unlock(&nblock); + /* + * getblk() is called with a vnode locked, and + * some majority of the dirty buffers may as + * well belong to the vnode. Flushing the + * buffers there would make a progress that + * cannot be achieved by the buf_daemon, that + * cannot lock the vnode. + */ + norunbuf = ~(TDP_BUFNEED | TDP_NORUNNINGBUF) | + (td->td_pflags & TDP_NORUNNINGBUF); + /* play bufdaemon */ + td->td_pflags |= TDP_BUFNEED | TDP_NORUNNINGBUF; + fl = buf_do_flush(vp); + td->td_pflags &= norunbuf; + mtx_lock(&nblock); + if (fl != 0) + continue; + if ((needsbuffer & flags) == 0) + break; + } + if (msleep(&needsbuffer, &nblock, (PRIBIO + 4) | slpflag, + waitmsg, slptimeo)) + break; + } + mtx_unlock(&nblock); +} + +static void +getnewbuf_reuse_bp(struct buf *bp, int qindex) +{ + + CTR6(KTR_BUF, "getnewbuf(%p) vp %p flags %X kvasize %d bufsize %d " + "queue %d (recycling)", bp, bp->b_vp, bp->b_flags, + bp->b_kvasize, bp->b_bufsize, qindex); + mtx_assert(&bqlock, MA_NOTOWNED); + /* - * We can't afford to block since we might be holding a vnode lock, - * which may prevent system daemons from running. 
We deal with - * low-memory situations by proactively returning memory and running - * async I/O rather then sync I/O. + * Note: we no longer distinguish between VMIO and non-VMIO + * buffers. */ - atomic_add_int(&getnewbufcalls, 1); - atomic_subtract_int(&getnewbufrestarts, 1); + KASSERT((bp->b_flags & B_DELWRI) == 0, + ("delwri buffer %p found in queue %d", bp, qindex)); + + if (qindex == QUEUE_CLEAN) { + if (bp->b_flags & B_VMIO) { + bp->b_flags &= ~B_ASYNC; + vfs_vmio_release(bp); + } + if (bp->b_vp != NULL) + brelvp(bp); + } + + /* + * Get the rest of the buffer freed up. b_kva* is still valid + * after this operation. + */ + + if (bp->b_rcred != NOCRED) { + crfree(bp->b_rcred); + bp->b_rcred = NOCRED; + } + if (bp->b_wcred != NOCRED) { + crfree(bp->b_wcred); + bp->b_wcred = NOCRED; + } + if (!LIST_EMPTY(&bp->b_dep)) + buf_deallocate(bp); + if (bp->b_vflags & BV_BKGRDINPROG) + panic("losing buffer 3"); + KASSERT(bp->b_vp == NULL, ("bp: %p still has vnode %p. qindex: %d", + bp, bp->b_vp, qindex)); + KASSERT((bp->b_xflags & (BX_VNCLEAN|BX_VNDIRTY)) == 0, + ("bp: %p still on a buffer list. xflags %X", bp, bp->b_xflags)); + + if (bp->b_bufsize) + allocbuf(bp, 0); + + bp->b_flags &= B_UNMAPPED | B_KVAALLOC; + bp->b_ioflags = 0; + bp->b_xflags = 0; + KASSERT((bp->b_vflags & BV_INFREECNT) == 0, + ("buf %p still counted as free?", bp)); + bp->b_vflags = 0; + bp->b_vp = NULL; + bp->b_blkno = bp->b_lblkno = 0; + bp->b_offset = NOOFFSET; + bp->b_iodone = 0; + bp->b_error = 0; + bp->b_resid = 0; + bp->b_bcount = 0; + bp->b_npages = 0; + bp->b_dirtyoff = bp->b_dirtyend = 0; + bp->b_bufobj = NULL; + bp->b_pin_count = 0; + bp->b_fsprivate1 = NULL; + bp->b_fsprivate2 = NULL; + bp->b_fsprivate3 = NULL; + + LIST_INIT(&bp->b_dep); +} + +static int flushingbufs; + +static struct buf * +getnewbuf_scan(int maxsize, int defrag, int unmapped) +{ + struct buf *bp, *nbp; + int nqindex, qindex; + + KASSERT(!unmapped || !defrag, ("both unmapped and defrag")); + restart: atomic_add_int(&getnewbufrestarts, 1); @@ -1856,15 +2162,22 @@ restart: * that if we are specially marked process, we are allowed to * dip into our reserves. * - * The scanning sequence is nominally: EMPTY->EMPTYKVA->CLEAN + * The scanning sequence is nominally: EMPTY->EMPTYKVA->CLEAN + * for the allocation of the mapped buffer. For unmapped, the + * easiest is to start with EMPTY outright. * * We start with EMPTYKVA. If the list is empty we backup to EMPTY. * However, there are a number of cases (defragging, reusing, ...) * where we cannot backup. */ mtx_lock(&bqlock); - nqindex = QUEUE_EMPTYKVA; - nbp = TAILQ_FIRST(&bufqueues[QUEUE_EMPTYKVA]); + if (!defrag && unmapped) { + nqindex = QUEUE_EMPTY; + nbp = TAILQ_FIRST(&bufqueues[QUEUE_EMPTY]); + } else { + nqindex = QUEUE_EMPTYKVA; + nbp = TAILQ_FIRST(&bufqueues[QUEUE_EMPTYKVA]); + } if (nbp == NULL) { /* @@ -1883,36 +2196,47 @@ restart: * CLEAN buffer, check to see if it is ok to use an EMPTY * buffer. We can only use an EMPTY buffer if allocating * its KVA would not otherwise run us out of buffer space. + * No KVA is needed for the unmapped allocation. */ if (nbp == NULL && defrag == 0 && bufspace + maxsize < hibufspace) { nqindex = QUEUE_EMPTY; nbp = TAILQ_FIRST(&bufqueues[QUEUE_EMPTY]); } + + /* + * All available buffers might be clean, retry + * ignoring the lobufspace as the last resort. + */ + if (nbp == NULL) { + nqindex = QUEUE_CLEAN; + nbp = TAILQ_FIRST(&bufqueues[QUEUE_CLEAN]); + } } /* * Run scan, possibly freeing data and/or kva mappings on the fly * depending. 
*/ - while ((bp = nbp) != NULL) { - int qindex = nqindex; + qindex = nqindex; /* - * Calculate next bp ( we can only use it if we do not block - * or do other fancy things ). + * Calculate next bp (we can only use it if we do not + * block or do other fancy things). */ if ((nbp = TAILQ_NEXT(bp, b_freelist)) == NULL) { switch(qindex) { case QUEUE_EMPTY: nqindex = QUEUE_EMPTYKVA; - if ((nbp = TAILQ_FIRST(&bufqueues[QUEUE_EMPTYKVA]))) + nbp = TAILQ_FIRST(&bufqueues[QUEUE_EMPTYKVA]); + if (nbp != NULL) break; /* FALLTHROUGH */ case QUEUE_EMPTYKVA: nqindex = QUEUE_CLEAN; - if ((nbp = TAILQ_FIRST(&bufqueues[QUEUE_CLEAN]))) + nbp = TAILQ_FIRST(&bufqueues[QUEUE_CLEAN]); + if (nbp != NULL) break; /* FALLTHROUGH */ case QUEUE_CLEAN: @@ -1948,22 +2272,9 @@ restart: } BO_UNLOCK(bp->b_bufobj); } - CTR6(KTR_BUF, - "getnewbuf(%p) vp %p flags %X kvasize %d bufsize %d " - "queue %d (recycling)", bp, bp->b_vp, bp->b_flags, - bp->b_kvasize, bp->b_bufsize, qindex); - /* - * Sanity Checks - */ - KASSERT(bp->b_qindex == qindex, ("getnewbuf: inconsistant queue %d bp %p", qindex, bp)); - - /* - * Note: we no longer distinguish between VMIO and non-VMIO - * buffers. - */ - - KASSERT((bp->b_flags & B_DELWRI) == 0, ("delwri buffer %p found in queue %d", bp, qindex)); + KASSERT(bp->b_qindex == qindex, + ("getnewbuf: inconsistent queue %d bp %p", qindex, bp)); if (bp->b_bufobj != NULL) BO_LOCK(bp->b_bufobj); @@ -1971,68 +2282,13 @@ restart: if (bp->b_bufobj != NULL) BO_UNLOCK(bp->b_bufobj); mtx_unlock(&bqlock); - - if (qindex == QUEUE_CLEAN) { - if (bp->b_flags & B_VMIO) { - bp->b_flags &= ~B_ASYNC; - vfs_vmio_release(bp); - } - if (bp->b_vp) - brelvp(bp); - } - /* * NOTE: nbp is now entirely invalid. We can only restart * the scan from this point on. - * - * Get the rest of the buffer freed up. b_kva* is still - * valid after this operation. */ - if (bp->b_rcred != NOCRED) { - crfree(bp->b_rcred); - bp->b_rcred = NOCRED; - } - if (bp->b_wcred != NOCRED) { - crfree(bp->b_wcred); - bp->b_wcred = NOCRED; - } - if (!LIST_EMPTY(&bp->b_dep)) - buf_deallocate(bp); - if (bp->b_vflags & BV_BKGRDINPROG) - panic("losing buffer 3"); - KASSERT(bp->b_vp == NULL, - ("bp: %p still has vnode %p. qindex: %d", - bp, bp->b_vp, qindex)); - KASSERT((bp->b_xflags & (BX_VNCLEAN|BX_VNDIRTY)) == 0, - ("bp: %p still on a buffer list. xflags %X", - bp, bp->b_xflags)); - - if (bp->b_bufsize) - allocbuf(bp, 0); - - bp->b_flags = 0; - bp->b_ioflags = 0; - bp->b_xflags = 0; - KASSERT((bp->b_vflags & BV_INFREECNT) == 0, - ("buf %p still counted as free?", bp)); - bp->b_vflags = 0; - bp->b_vp = NULL; - bp->b_blkno = bp->b_lblkno = 0; - bp->b_offset = NOOFFSET; - bp->b_iodone = 0; - bp->b_error = 0; - bp->b_resid = 0; - bp->b_bcount = 0; - bp->b_npages = 0; - bp->b_dirtyoff = bp->b_dirtyend = 0; - bp->b_bufobj = NULL; - bp->b_pin_count = 0; - bp->b_fsprivate1 = NULL; - bp->b_fsprivate2 = NULL; - bp->b_fsprivate3 = NULL; - - LIST_INIT(&bp->b_dep); + getnewbuf_reuse_bp(bp, qindex); + mtx_assert(&bqlock, MA_NOTOWNED); /* * If we are defragging then free the buffer. @@ -2073,6 +2329,52 @@ restart: flushingbufs = 0; break; } + return (bp); +} + +/* + * getnewbuf: + * + * Find and initialize a new buffer header, freeing up existing buffers + * in the bufqueues as necessary. The new buffer is returned locked. + * + * Important: B_INVAL is not set. If the caller wishes to throw the + * buffer away, the caller must set B_INVAL prior to calling brelse(). 
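+ *	A caller that then decides to discard the buffer does, for example:
+ *
+ *		bp->b_flags |= B_INVAL;
+ *		brelse(bp);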
+ * + * We block if: + * We have insufficient buffer headers + * We have insufficient buffer space + * buffer_map is too fragmented ( space reservation fails ) + * If we have to flush dirty buffers ( but we try to avoid this ) + * + * To avoid VFS layer recursion we do not flush dirty buffers ourselves. + * Instead we ask the buf daemon to do it for us. We attempt to + * avoid piecemeal wakeups of the pageout daemon. + */ +static struct buf * +getnewbuf(struct vnode *vp, int slpflag, int slptimeo, int size, int maxsize, + int gbflags) +{ + struct buf *bp; + int defrag; + + KASSERT((gbflags & (GB_UNMAPPED | GB_KVAALLOC)) != GB_KVAALLOC, + ("GB_KVAALLOC only makes sense with GB_UNMAPPED")); + + defrag = 0; + /* + * We can't afford to block since we might be holding a vnode lock, + * which may prevent system daemons from running. We deal with + * low-memory situations by proactively returning memory and running + * async I/O rather then sync I/O. + */ + atomic_add_int(&getnewbufcalls, 1); + atomic_subtract_int(&getnewbufrestarts, 1); +restart: + bp = getnewbuf_scan(maxsize, defrag, (gbflags & (GB_UNMAPPED | + GB_KVAALLOC)) == GB_UNMAPPED); + if (bp != NULL) + defrag = 0; /* * If we exhausted our list, sleep as appropriate. We may have to @@ -2080,65 +2382,23 @@ restart: * * Generally we are sleeping due to insufficient buffer space. */ - if (bp == NULL) { - int flags, norunbuf; - char *waitmsg; - int fl; - - if (defrag) { - flags = VFS_BIO_NEED_BUFSPACE; - waitmsg = "nbufkv"; - } else if (bufspace >= hibufspace) { - waitmsg = "nbufbs"; - flags = VFS_BIO_NEED_BUFSPACE; - } else { - waitmsg = "newbuf"; - flags = VFS_BIO_NEED_ANY; - } - mtx_lock(&nblock); - needsbuffer |= flags; - mtx_unlock(&nblock); - mtx_unlock(&bqlock); - - bd_speedup(); /* heeeelp */ - if (gbflags & GB_NOWAIT_BD) - return (NULL); - - mtx_lock(&nblock); - while (needsbuffer & flags) { - if (vp != NULL && (td->td_pflags & TDP_BUFNEED) == 0) { - mtx_unlock(&nblock); - /* - * getblk() is called with a vnode - * locked, and some majority of the - * dirty buffers may as well belong to - * the vnode. Flushing the buffers - * there would make a progress that - * cannot be achieved by the - * buf_daemon, that cannot lock the - * vnode. - */ - norunbuf = ~(TDP_BUFNEED | TDP_NORUNNINGBUF) | - (td->td_pflags & TDP_NORUNNINGBUF); - /* play bufdaemon */ - td->td_pflags |= TDP_BUFNEED | TDP_NORUNNINGBUF; - fl = buf_do_flush(vp); - td->td_pflags &= norunbuf; - mtx_lock(&nblock); - if (fl != 0) - continue; - if ((needsbuffer & flags) == 0) - break; - } - if (msleep(&needsbuffer, &nblock, - (PRIBIO + 4) | slpflag, waitmsg, slptimeo)) { - mtx_unlock(&nblock); - return (NULL); - } - } - mtx_unlock(&nblock); + mtx_assert(&bqlock, MA_OWNED); + getnewbuf_bufd_help(vp, gbflags, slpflag, slptimeo, defrag); + mtx_assert(&bqlock, MA_NOTOWNED); + } else if ((gbflags & (GB_UNMAPPED | GB_KVAALLOC)) == GB_UNMAPPED) { + mtx_assert(&bqlock, MA_NOTOWNED); + + bfreekva(bp); + bp->b_flags |= B_UNMAPPED; + bp->b_kvabase = bp->b_data = unmapped_buf; + bp->b_kvasize = maxsize; + atomic_add_long(&bufspace, bp->b_kvasize); + atomic_add_long(&unmapped_bufspace, bp->b_kvasize); + atomic_add_int(&bufreusecnt, 1); } else { + mtx_assert(&bqlock, MA_NOTOWNED); + /* * We finally have a valid bp. We aren't quite out of the * woods, we still have to reserve kva space. 
In order @@ -2147,39 +2407,29 @@ restart: */ maxsize = (maxsize + BKVAMASK) & ~BKVAMASK; - if (maxsize != bp->b_kvasize) { - vm_offset_t addr = 0; - int rv; - - bfreekva(bp); - - vm_map_lock(buffer_map); - if (vm_map_findspace(buffer_map, - vm_map_min(buffer_map), maxsize, &addr)) { - /* - * Buffer map is too fragmented. - * We must defragment the map. - */ - atomic_add_int(&bufdefragcnt, 1); - vm_map_unlock(buffer_map); + if (maxsize != bp->b_kvasize || (bp->b_flags & (B_UNMAPPED | + B_KVAALLOC)) == B_UNMAPPED) { + if (allocbufkva(bp, maxsize, gbflags)) { defrag = 1; bp->b_flags |= B_INVAL; brelse(bp); goto restart; } - rv = vm_map_insert(buffer_map, NULL, 0, addr, - addr + maxsize, VM_PROT_ALL, VM_PROT_ALL, - MAP_NOFAULT); - KASSERT(rv == KERN_SUCCESS, - ("vm_map_insert(buffer_map) rv %d", rv)); - vm_map_unlock(buffer_map); - bp->b_kvabase = (caddr_t)addr; - bp->b_kvasize = maxsize; - atomic_add_long(&bufspace, bp->b_kvasize); atomic_add_int(&bufreusecnt, 1); + } else if ((bp->b_flags & B_KVAALLOC) != 0 && + (gbflags & (GB_UNMAPPED | GB_KVAALLOC)) == 0) { + bp->b_kvabase = bp->b_kvaalloc; + bp->b_flags &= ~B_KVAALLOC; + atomic_subtract_long(&unmapped_bufspace, + bp->b_kvasize); + atomic_add_int(&bufreusecnt, 1); + } + if ((gbflags & GB_UNMAPPED) == 0) { + bp->b_saveaddr = bp->b_kvabase; + bp->b_data = bp->b_saveaddr; + bp->b_flags &= ~B_UNMAPPED; + BUF_CHECK_MAPPED(bp); } - bp->b_saveaddr = bp->b_kvabase; - bp->b_data = bp->b_saveaddr; } return (bp); } @@ -2590,6 +2840,90 @@ vfs_setdirty_locked_object(struct buf *bp) } /* + * Allocate the KVA mapping for an existing buffer. It handles the + * cases of both B_UNMAPPED buffer, and buffer with the preallocated + * KVA which is not mapped (B_KVAALLOC). + */ +static void +bp_unmapped_get_kva(struct buf *bp, daddr_t blkno, int size, int gbflags) +{ + struct buf *scratch_bp; + int bsize, maxsize, need_mapping, need_kva; + off_t offset; + + need_mapping = (bp->b_flags & B_UNMAPPED) != 0 && + (gbflags & GB_UNMAPPED) == 0; + need_kva = (bp->b_flags & (B_KVAALLOC | B_UNMAPPED)) == B_UNMAPPED && + (gbflags & GB_KVAALLOC) != 0; + if (!need_mapping && !need_kva) + return; + + BUF_CHECK_UNMAPPED(bp); + + if (need_mapping && (bp->b_flags & B_KVAALLOC) != 0) { + /* + * Buffer is not mapped, but the KVA was already + * reserved at the time of the instantiation. Use the + * allocated space. + */ + bp->b_flags &= ~B_KVAALLOC; + KASSERT(bp->b_kvaalloc != 0, ("kvaalloc == 0")); + bp->b_kvabase = bp->b_kvaalloc; + atomic_subtract_long(&unmapped_bufspace, bp->b_kvasize); + goto has_addr; + } + + /* + * Calculate the amount of the address space we would reserve + * if the buffer was mapped. + */ + bsize = vn_isdisk(bp->b_vp, NULL) ? DEV_BSIZE : bp->b_bufobj->bo_bsize; + offset = blkno * bsize; + maxsize = size + (offset & PAGE_MASK); + maxsize = imax(maxsize, bsize); + +mapping_loop: + if (allocbufkva(bp, maxsize, gbflags)) { + /* + * Request defragmentation. getnewbuf() returns us the + * allocated space by the scratch buffer KVA. + */ + scratch_bp = getnewbuf(bp->b_vp, 0, 0, size, maxsize, gbflags | + (GB_UNMAPPED | GB_KVAALLOC)); + if (scratch_bp == NULL) { + if ((gbflags & GB_NOWAIT_BD) != 0) { + /* + * XXXKIB: defragmentation cannot + * succeed, not sure what else to do. 
+ */ + panic("GB_NOWAIT_BD and B_UNMAPPED %p", bp); + } + atomic_add_int(&mappingrestarts, 1); + goto mapping_loop; + } + KASSERT((scratch_bp->b_flags & B_KVAALLOC) != 0, + ("scratch bp !B_KVAALLOC %p", scratch_bp)); + setbufkva(bp, (vm_offset_t)scratch_bp->b_kvaalloc, + scratch_bp->b_kvasize, gbflags); + + /* Get rid of the scratch buffer. */ + scratch_bp->b_kvasize = 0; + scratch_bp->b_flags |= B_INVAL | B_UNMAPPED; + scratch_bp->b_flags &= ~B_KVAALLOC; + brelse(scratch_bp); + } + if (!need_mapping) + return; + +has_addr: + bp->b_saveaddr = bp->b_kvabase; + bp->b_data = bp->b_saveaddr; /* b_offset is handled by bpmap_qenter */ + bp->b_flags &= ~B_UNMAPPED; + BUF_CHECK_MAPPED(bp); + bpmap_qenter(bp); +} + +/* * getblk: * * Get a block given a specified block and offset into a file/device. @@ -2626,14 +2960,17 @@ vfs_setdirty_locked_object(struct buf *bp) * prior to issuing the READ. biodone() will *not* clear B_INVAL. */ struct buf * -getblk(struct vnode * vp, daddr_t blkno, int size, int slpflag, int slptimeo, +getblk(struct vnode *vp, daddr_t blkno, int size, int slpflag, int slptimeo, int flags) { struct buf *bp; struct bufobj *bo; - int error; + int bsize, error, maxsize, vmio; + off_t offset; CTR3(KTR_BUF, "getblk(%p, %ld, %d)", vp, (long)blkno, size); + KASSERT((flags & (GB_UNMAPPED | GB_KVAALLOC)) != GB_KVAALLOC, + ("GB_KVAALLOC only makes sense with GB_UNMAPPED")); ASSERT_VOP_LOCKED(vp, "getblk"); if (size > MAXBSIZE) panic("getblk: size(%d) > MAXBSIZE(%d)\n", size, MAXBSIZE); @@ -2701,9 +3038,8 @@ loop: } /* - * check for size inconsistancies for non-VMIO case. + * check for size inconsistencies for non-VMIO case. */ - if (bp->b_bcount != size) { if ((bp->b_flags & B_VMIO) == 0 || (size > bp->b_kvasize)) { @@ -2737,12 +3073,18 @@ loop: } /* + * Handle the case of unmapped buffer which should + * become mapped, or the buffer for which KVA + * reservation is requested. + */ + bp_unmapped_get_kva(bp, blkno, size, flags); + + /* * If the size is inconsistant in the VMIO case, we can resize * the buffer. This might lead to B_CACHE getting set or * cleared. If the size has not changed, B_CACHE remains * unchanged from its previous state. */ - if (bp->b_bcount != size) allocbuf(bp, size); @@ -2783,9 +3125,6 @@ loop: } bp->b_flags &= ~B_DONE; } else { - int bsize, maxsize, vmio; - off_t offset; - /* * Buffer is not in-core, create new buffer. The buffer * returned by getnewbuf() is locked. Note that the returned @@ -2801,7 +3140,13 @@ loop: bsize = vn_isdisk(vp, NULL) ? DEV_BSIZE : bo->bo_bsize; offset = blkno * bsize; vmio = vp->v_object != NULL; - maxsize = vmio ? size + (offset & PAGE_MASK) : size; + if (vmio) { + maxsize = size + (offset & PAGE_MASK); + } else { + maxsize = size; + /* Do not allow non-VMIO notmapped buffers. */ + flags &= ~GB_UNMAPPED; + } maxsize = imax(maxsize, bsize); bp = getnewbuf(vp, slpflag, slptimeo, size, maxsize, flags); @@ -2857,6 +3202,7 @@ loop: KASSERT(bp->b_bufobj->bo_object == NULL, ("ARGH! 
has b_bufobj->bo_object %p %p\n", bp, bp->b_bufobj->bo_object)); + BUF_CHECK_MAPPED(bp); } allocbuf(bp, size); @@ -3031,10 +3377,14 @@ allocbuf(struct buf *bp, int size) if (desiredpages < bp->b_npages) { vm_page_t m; - pmap_qremove((vm_offset_t)trunc_page( - (vm_offset_t)bp->b_data) + - (desiredpages << PAGE_SHIFT), - (bp->b_npages - desiredpages)); + if ((bp->b_flags & B_UNMAPPED) == 0) { + BUF_CHECK_MAPPED(bp); + pmap_qremove((vm_offset_t)trunc_page( + (vm_offset_t)bp->b_data) + + (desiredpages << PAGE_SHIFT), + (bp->b_npages - desiredpages)); + } else + BUF_CHECK_UNMAPPED(bp); VM_OBJECT_LOCK(bp->b_bufobj->bo_object); for (i = desiredpages; i < bp->b_npages; i++) { /* @@ -3140,21 +3490,12 @@ allocbuf(struct buf *bp, int size) VM_OBJECT_UNLOCK(obj); /* - * Step 3, fixup the KVM pmap. Remember that - * bp->b_data is relative to bp->b_offset, but - * bp->b_offset may be offset into the first page. + * Step 3, fixup the KVM pmap. */ - - bp->b_data = (caddr_t) - trunc_page((vm_offset_t)bp->b_data); - pmap_qenter( - (vm_offset_t)bp->b_data, - bp->b_pages, - bp->b_npages - ); - - bp->b_data = (caddr_t)((vm_offset_t)bp->b_data | - (vm_offset_t)(bp->b_offset & PAGE_MASK)); + if ((bp->b_flags & B_UNMAPPED) == 0) + bpmap_qenter(bp); + else + BUF_CHECK_UNMAPPED(bp); } } if (newbsize < bp->b_bufsize) @@ -3164,21 +3505,38 @@ allocbuf(struct buf *bp, int size) return 1; } +extern int inflight_transient_maps; + void biodone(struct bio *bp) { struct mtx *mtxp; void (*done)(struct bio *); + vm_offset_t start, end; + int transient; mtxp = mtx_pool_find(mtxpool_sleep, bp); mtx_lock(mtxp); bp->bio_flags |= BIO_DONE; + if ((bp->bio_flags & BIO_TRANSIENT_MAPPING) != 0) { + start = trunc_page((vm_offset_t)bp->bio_data); + end = round_page((vm_offset_t)bp->bio_data + bp->bio_length); + transient = 1; + } else { + transient = 0; + start = end = 0; + } done = bp->bio_done; if (done == NULL) wakeup(bp); mtx_unlock(mtxp); if (done != NULL) done(bp); + if (transient) { + pmap_qremove(start, OFF_TO_IDX(end - start)); + vm_map_remove(bio_transient_map, start, end); + atomic_add_int(&inflight_transient_maps, -1); + } } /* @@ -3281,7 +3639,7 @@ dev_strategy(struct cdev *dev, struct buf *bp) bip->bio_offset = bp->b_iooffset; bip->bio_length = bp->b_bcount; bip->bio_bcount = bp->b_bcount; /* XXX: remove */ - bip->bio_data = bp->b_data; + bdata2bio(bp, bip); bip->bio_done = bufdonebio; bip->bio_caller2 = bp; bip->bio_dev = dev; @@ -3435,9 +3793,11 @@ bufdone_finish(struct buf *bp) } vm_object_pip_wakeupn(obj, 0); VM_OBJECT_UNLOCK(obj); - if (bogus) + if (bogus && (bp->b_flags & B_UNMAPPED) == 0) { + BUF_CHECK_MAPPED(bp); pmap_qenter(trunc_page((vm_offset_t)bp->b_data), bp->b_pages, bp->b_npages); + } } /* @@ -3480,8 +3840,12 @@ vfs_unbusy_pages(struct buf *bp) if (!m) panic("vfs_unbusy_pages: page missing\n"); bp->b_pages[i] = m; - pmap_qenter(trunc_page((vm_offset_t)bp->b_data), - bp->b_pages, bp->b_npages); + if ((bp->b_flags & B_UNMAPPED) == 0) { + BUF_CHECK_MAPPED(bp); + pmap_qenter(trunc_page((vm_offset_t)bp->b_data), + bp->b_pages, bp->b_npages); + } else + BUF_CHECK_UNMAPPED(bp); } vm_object_pip_subtract(obj, 1); vm_page_io_finish(m); @@ -3646,9 +4010,11 @@ vfs_busy_pages(struct buf *bp, int clear_modify) foff = (foff + PAGE_SIZE) & ~(off_t)PAGE_MASK; } VM_OBJECT_UNLOCK(obj); - if (bogus) + if (bogus && (bp->b_flags & B_UNMAPPED) == 0) { + BUF_CHECK_MAPPED(bp); pmap_qenter(trunc_page((vm_offset_t)bp->b_data), bp->b_pages, bp->b_npages); + } } /* @@ -3704,8 +4070,7 @@ vfs_bio_set_valid(struct buf *bp, int base, int 
size) void vfs_bio_clrbuf(struct buf *bp) { - int i, j, mask; - caddr_t sa, ea; + int i, j, mask, sa, ea, slide; if ((bp->b_flags & (B_VMIO | B_MALLOC)) != B_VMIO) { clrbuf(bp); @@ -3723,39 +4088,69 @@ vfs_bio_clrbuf(struct buf *bp) if ((bp->b_pages[0]->valid & mask) == mask) goto unlock; if ((bp->b_pages[0]->valid & mask) == 0) { - bzero(bp->b_data, bp->b_bufsize); + pmap_zero_page_area(bp->b_pages[0], 0, bp->b_bufsize); bp->b_pages[0]->valid |= mask; goto unlock; } } - ea = sa = bp->b_data; - for(i = 0; i < bp->b_npages; i++, sa = ea) { - ea = (caddr_t)trunc_page((vm_offset_t)sa + PAGE_SIZE); - ea = (caddr_t)(vm_offset_t)ulmin( - (u_long)(vm_offset_t)ea, - (u_long)(vm_offset_t)bp->b_data + bp->b_bufsize); + sa = bp->b_offset & PAGE_MASK; + slide = 0; + for (i = 0; i < bp->b_npages; i++) { + slide = imin(slide + PAGE_SIZE, bp->b_bufsize + sa); + ea = slide & PAGE_MASK; + if (ea == 0) + ea = PAGE_SIZE; if (bp->b_pages[i] == bogus_page) continue; - j = ((vm_offset_t)sa & PAGE_MASK) / DEV_BSIZE; + j = sa / DEV_BSIZE; mask = ((1 << ((ea - sa) / DEV_BSIZE)) - 1) << j; VM_OBJECT_LOCK_ASSERT(bp->b_pages[i]->object, MA_OWNED); if ((bp->b_pages[i]->valid & mask) == mask) continue; if ((bp->b_pages[i]->valid & mask) == 0) - bzero(sa, ea - sa); + pmap_zero_page_area(bp->b_pages[i], sa, ea - sa); else { for (; sa < ea; sa += DEV_BSIZE, j++) { - if ((bp->b_pages[i]->valid & (1 << j)) == 0) - bzero(sa, DEV_BSIZE); + if ((bp->b_pages[i]->valid & (1 << j)) == 0) { + pmap_zero_page_area(bp->b_pages[i], + sa, DEV_BSIZE); + } } } bp->b_pages[i]->valid |= mask; + sa = 0; } unlock: VM_OBJECT_UNLOCK(bp->b_bufobj->bo_object); bp->b_resid = 0; } +void +vfs_bio_bzero_buf(struct buf *bp, int base, int size) +{ + vm_page_t m; + int i, n; + + if ((bp->b_flags & B_UNMAPPED) == 0) { + BUF_CHECK_MAPPED(bp); + bzero(bp->b_data + base, size); + } else { + BUF_CHECK_UNMAPPED(bp); + n = PAGE_SIZE - (base & PAGE_MASK); + VM_OBJECT_LOCK(bp->b_bufobj->bo_object); + for (i = base / PAGE_SIZE; size > 0 && i < bp->b_npages; ++i) { + m = bp->b_pages[i]; + if (n > size) + n = size; + pmap_zero_page_area(m, base & PAGE_MASK, n); + base += n; + size -= n; + n = PAGE_SIZE; + } + VM_OBJECT_UNLOCK(bp->b_bufobj->bo_object); + } +} + /* * vm_hold_load_pages and vm_hold_free_pages get pages into * a buffers address space. The pages are anonymous and are @@ -3768,6 +4163,8 @@ vm_hold_load_pages(struct buf *bp, vm_offset_t from, vm_offset_t to) vm_page_t p; int index; + BUF_CHECK_MAPPED(bp); + to = round_page(to); from = round_page(from); index = (from - trunc_page((vm_offset_t)bp->b_data)) >> PAGE_SHIFT; @@ -3799,6 +4196,8 @@ vm_hold_free_pages(struct buf *bp, int newbsize) vm_page_t p; int index, newnpages; + BUF_CHECK_MAPPED(bp); + from = round_page((vm_offset_t)bp->b_data + newbsize); newnpages = (from - trunc_page((vm_offset_t)bp->b_data)) >> PAGE_SHIFT; if (bp->b_npages > newnpages) @@ -3829,7 +4228,7 @@ vm_hold_free_pages(struct buf *bp, int newbsize) * check the return value. 
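 *
 * vmapbuf() now takes a second argument: non-zero "mapbuf" gives the
 * traditional mapped b_data, zero leaves the held pages unmapped
 * (B_UNMAPPED, b_data pointing at unmapped_buf).  A mapped caller
 * keeps the usual pattern, e.g.:
 *
 *	if (vmapbuf(bp, 1) < 0)
 *		return (EFAULT);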
*/ int -vmapbuf(struct buf *bp) +vmapbuf(struct buf *bp, int mapbuf) { caddr_t kva; vm_prot_t prot; @@ -3844,12 +4243,19 @@ vmapbuf(struct buf *bp) (vm_offset_t)bp->b_data, bp->b_bufsize, prot, bp->b_pages, btoc(MAXPHYS))) < 0) return (-1); - pmap_qenter((vm_offset_t)bp->b_saveaddr, bp->b_pages, pidx); - - kva = bp->b_saveaddr; bp->b_npages = pidx; - bp->b_saveaddr = bp->b_data; - bp->b_data = kva + (((vm_offset_t) bp->b_data) & PAGE_MASK); + if (mapbuf) { + pmap_qenter((vm_offset_t)bp->b_saveaddr, bp->b_pages, pidx); + kva = bp->b_saveaddr; + bp->b_saveaddr = bp->b_data; + bp->b_data = kva + (((vm_offset_t)bp->b_data) & PAGE_MASK); + bp->b_flags &= ~B_UNMAPPED; + } else { + bp->b_flags |= B_UNMAPPED; + bp->b_offset = ((vm_offset_t)bp->b_data) & PAGE_MASK; + bp->b_saveaddr = bp->b_data; + bp->b_data = unmapped_buf; + } return(0); } @@ -3863,7 +4269,10 @@ vunmapbuf(struct buf *bp) int npages; npages = bp->b_npages; - pmap_qremove(trunc_page((vm_offset_t)bp->b_data), npages); + if (bp->b_flags & B_UNMAPPED) + bp->b_flags &= ~B_UNMAPPED; + else + pmap_qremove(trunc_page((vm_offset_t)bp->b_data), npages); vm_page_unhold_pages(bp->b_pages, npages); bp->b_data = bp->b_saveaddr; @@ -4000,6 +4409,29 @@ bunpin_wait(struct buf *bp) mtx_unlock(mtxp); } +/* + * Set bio_data or bio_ma for struct bio from the struct buf. + */ +void +bdata2bio(struct buf *bp, struct bio *bip) +{ + + if ((bp->b_flags & B_UNMAPPED) != 0) { + bip->bio_ma = bp->b_pages; + bip->bio_ma_n = bp->b_npages; + bip->bio_data = unmapped_buf; + bip->bio_ma_offset = (vm_offset_t)bp->b_offset & PAGE_MASK; + bip->bio_flags |= BIO_UNMAPPED; + KASSERT(round_page(bip->bio_ma_offset + bip->bio_length) / + PAGE_SIZE == bp->b_npages, + ("Buffer %p too short: %d %d %d", bp, bip->bio_ma_offset, + bip->bio_length, bip->bio_ma_n)); + } else { + bip->bio_data = bp->b_data; + bip->bio_ma = NULL; + } +} + #include "opt_ddb.h" #ifdef DDB #include diff --git a/sys/kern/vfs_cluster.c b/sys/kern/vfs_cluster.c index 663b66f..60338b2 100644 --- a/sys/kern/vfs_cluster.c +++ b/sys/kern/vfs_cluster.c @@ -60,11 +60,11 @@ SYSCTL_INT(_debug, OID_AUTO, rcluster, CTLFLAG_RW, &rcluster, 0, static MALLOC_DEFINE(M_SEGMENT, "cl_savebuf", "cluster_save buffer"); -static struct cluster_save * - cluster_collectbufs(struct vnode *vp, struct buf *last_bp); -static struct buf * - cluster_rbuild(struct vnode *vp, u_quad_t filesize, daddr_t lbn, - daddr_t blkno, long size, int run, struct buf *fbp); +static struct cluster_save *cluster_collectbufs(struct vnode *vp, + struct buf *last_bp, int gbflags); +static struct buf *cluster_rbuild(struct vnode *vp, u_quad_t filesize, + daddr_t lbn, daddr_t blkno, long size, int run, int gbflags, + struct buf *fbp); static void cluster_callback(struct buf *); static int write_behind = 1; @@ -83,15 +83,9 @@ extern vm_page_t bogus_page; * cluster_read replaces bread. 
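 *
 * The interface gains a gbflags argument that is passed through to
 * getblk() and cluster_rbuild(); ffs_balloc() later in this patch
 * calls it as:
 *
 *	error = cluster_read(vp, ip->i_size, lbn, (int)fs->fs_bsize,
 *	    NOCRED, MAXBSIZE, seqcount, gbflags, &nbp);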
*/ int -cluster_read(vp, filesize, lblkno, size, cred, totread, seqcount, bpp) - struct vnode *vp; - u_quad_t filesize; - daddr_t lblkno; - long size; - struct ucred *cred; - long totread; - int seqcount; - struct buf **bpp; +cluster_read(struct vnode *vp, u_quad_t filesize, daddr_t lblkno, long size, + struct ucred *cred, long totread, int seqcount, int gbflags, + struct buf **bpp) { struct buf *bp, *rbp, *reqbp; struct bufobj *bo; @@ -117,7 +111,7 @@ cluster_read(vp, filesize, lblkno, size, cred, totread, seqcount, bpp) /* * get the requested block */ - *bpp = reqbp = bp = getblk(vp, lblkno, size, 0, 0, 0); + *bpp = reqbp = bp = getblk(vp, lblkno, size, 0, 0, gbflags); origblkno = lblkno; /* @@ -208,7 +202,7 @@ cluster_read(vp, filesize, lblkno, size, cred, totread, seqcount, bpp) if (ncontig < nblks) nblks = ncontig; bp = cluster_rbuild(vp, filesize, lblkno, - blkno, size, nblks, bp); + blkno, size, nblks, gbflags, bp); lblkno += (bp->b_bufsize / size); } else { bp->b_flags |= B_RAM; @@ -252,14 +246,14 @@ cluster_read(vp, filesize, lblkno, size, cred, totread, seqcount, bpp) if (ncontig) { ncontig = min(ncontig + 1, racluster); rbp = cluster_rbuild(vp, filesize, lblkno, blkno, - size, ncontig, NULL); + size, ncontig, gbflags, NULL); lblkno += (rbp->b_bufsize / size); if (rbp->b_flags & B_DELWRI) { bqrelse(rbp); continue; } } else { - rbp = getblk(vp, lblkno, size, 0, 0, 0); + rbp = getblk(vp, lblkno, size, 0, 0, gbflags); lblkno += 1; if (rbp->b_flags & B_DELWRI) { bqrelse(rbp); @@ -298,14 +292,8 @@ cluster_read(vp, filesize, lblkno, size, cred, totread, seqcount, bpp) * and then parcel them up into logical blocks in the buffer hash table. */ static struct buf * -cluster_rbuild(vp, filesize, lbn, blkno, size, run, fbp) - struct vnode *vp; - u_quad_t filesize; - daddr_t lbn; - daddr_t blkno; - long size; - int run; - struct buf *fbp; +cluster_rbuild(struct vnode *vp, u_quad_t filesize, daddr_t lbn, + daddr_t blkno, long size, int run, int gbflags, struct buf *fbp) { struct bufobj *bo; struct buf *bp, *tbp; @@ -329,7 +317,7 @@ cluster_rbuild(vp, filesize, lbn, blkno, size, run, fbp) tbp = fbp; tbp->b_iocmd = BIO_READ; } else { - tbp = getblk(vp, lbn, size, 0, 0, 0); + tbp = getblk(vp, lbn, size, 0, 0, gbflags); if (tbp->b_flags & B_CACHE) return tbp; tbp->b_flags |= B_ASYNC | B_RAM; @@ -350,9 +338,14 @@ cluster_rbuild(vp, filesize, lbn, blkno, size, run, fbp) * address may not be either. Inherit the b_data offset * from the original buffer. */ - bp->b_data = (char *)((vm_offset_t)bp->b_data | - ((vm_offset_t)tbp->b_data & PAGE_MASK)); bp->b_flags = B_ASYNC | B_CLUSTER | B_VMIO; + if ((gbflags & GB_UNMAPPED) != 0) { + bp->b_flags |= B_UNMAPPED; + bp->b_data = unmapped_buf; + } else { + bp->b_data = (char *)((vm_offset_t)bp->b_data | + ((vm_offset_t)tbp->b_data & PAGE_MASK)); + } bp->b_iocmd = BIO_READ; bp->b_iodone = cluster_callback; bp->b_blkno = blkno; @@ -376,7 +369,8 @@ cluster_rbuild(vp, filesize, lbn, blkno, size, run, fbp) break; } - tbp = getblk(vp, lbn + i, size, 0, 0, GB_LOCK_NOWAIT); + tbp = getblk(vp, lbn + i, size, 0, 0, GB_LOCK_NOWAIT | + (gbflags & GB_UNMAPPED)); /* Don't wait around for locked bufs. 
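 * getblk() is called with GB_LOCK_NOWAIT above, so it returns NULL
 * rather than sleeping on a contested buffer lock.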
*/ if (tbp == NULL) @@ -499,8 +493,10 @@ cluster_rbuild(vp, filesize, lbn, blkno, size, run, fbp) bp->b_bufsize, bp->b_kvasize); bp->b_kvasize = bp->b_bufsize; - pmap_qenter(trunc_page((vm_offset_t) bp->b_data), - (vm_page_t *)bp->b_pages, bp->b_npages); + if ((bp->b_flags & B_UNMAPPED) == 0) { + pmap_qenter(trunc_page((vm_offset_t) bp->b_data), + (vm_page_t *)bp->b_pages, bp->b_npages); + } return (bp); } @@ -523,7 +519,10 @@ cluster_callback(bp) if (bp->b_ioflags & BIO_ERROR) error = bp->b_error; - pmap_qremove(trunc_page((vm_offset_t) bp->b_data), bp->b_npages); + if ((bp->b_flags & B_UNMAPPED) == 0) { + pmap_qremove(trunc_page((vm_offset_t) bp->b_data), + bp->b_npages); + } /* * Move memory from the large cluster buffer into the component * buffers and mark IO as done on these. @@ -565,18 +564,19 @@ cluster_callback(bp) */ static __inline int -cluster_wbuild_wb(struct vnode *vp, long size, daddr_t start_lbn, int len) +cluster_wbuild_wb(struct vnode *vp, long size, daddr_t start_lbn, int len, + int gbflags) { int r = 0; - switch(write_behind) { + switch (write_behind) { case 2: if (start_lbn < len) break; start_lbn -= len; /* FALLTHROUGH */ case 1: - r = cluster_wbuild(vp, size, start_lbn, len); + r = cluster_wbuild(vp, size, start_lbn, len, gbflags); /* FALLTHROUGH */ default: /* FALLTHROUGH */ @@ -596,7 +596,8 @@ cluster_wbuild_wb(struct vnode *vp, long size, daddr_t start_lbn, int len) * 4. end of a cluster - asynchronously write cluster */ void -cluster_write(struct vnode *vp, struct buf *bp, u_quad_t filesize, int seqcount) +cluster_write(struct vnode *vp, struct buf *bp, u_quad_t filesize, int seqcount, + int gbflags) { daddr_t lbn; int maxclen, cursize; @@ -642,13 +643,13 @@ cluster_write(struct vnode *vp, struct buf *bp, u_quad_t filesize, int seqcount) lbn != vp->v_lastw + 1 || vp->v_clen <= cursize) { if (!async && seqcount > 0) { cluster_wbuild_wb(vp, lblocksize, - vp->v_cstart, cursize); + vp->v_cstart, cursize, gbflags); } } else { struct buf **bpp, **endbp; struct cluster_save *buflist; - buflist = cluster_collectbufs(vp, bp); + buflist = cluster_collectbufs(vp, bp, gbflags); endbp = &buflist->bs_children [buflist->bs_nchildren - 1]; if (VOP_REALLOCBLKS(vp, buflist)) { @@ -667,7 +668,7 @@ cluster_write(struct vnode *vp, struct buf *bp, u_quad_t filesize, int seqcount) if (seqcount > 1) { cluster_wbuild_wb(vp, lblocksize, vp->v_cstart, - cursize); + cursize, gbflags); } } else { /* @@ -715,8 +716,10 @@ cluster_write(struct vnode *vp, struct buf *bp, u_quad_t filesize, int seqcount) * update daemon handle it. */ bdwrite(bp); - if (seqcount > 1) - cluster_wbuild_wb(vp, lblocksize, vp->v_cstart, vp->v_clen + 1); + if (seqcount > 1) { + cluster_wbuild_wb(vp, lblocksize, vp->v_cstart, + vp->v_clen + 1, gbflags); + } vp->v_clen = 0; vp->v_cstart = lbn + 1; } else if (vm_page_count_severe()) { @@ -742,11 +745,8 @@ cluster_write(struct vnode *vp, struct buf *bp, u_quad_t filesize, int seqcount) * the current block (if last_bp == NULL). */ int -cluster_wbuild(vp, size, start_lbn, len) - struct vnode *vp; - long size; - daddr_t start_lbn; - int len; +cluster_wbuild(struct vnode *vp, long size, daddr_t start_lbn, int len, + int gbflags) { struct buf *bp, *tbp; struct bufobj *bo; @@ -832,10 +832,16 @@ cluster_wbuild(vp, size, start_lbn, len) * address may not be either. Inherit the b_data offset * from the original buffer. 
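 * For a GB_UNMAPPED request on a VMIO buffer the cluster buffer keeps
 * b_data at unmapped_buf and carries only the page array; the branch
 * below implements both cases.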
*/ - bp->b_data = (char *)((vm_offset_t)bp->b_data | - ((vm_offset_t)tbp->b_data & PAGE_MASK)); - bp->b_flags |= B_CLUSTER | - (tbp->b_flags & (B_VMIO | B_NEEDCOMMIT)); + if ((gbflags & GB_UNMAPPED) == 0 || + (tbp->b_flags & B_VMIO) == 0) { + bp->b_data = (char *)((vm_offset_t)bp->b_data | + ((vm_offset_t)tbp->b_data & PAGE_MASK)); + } else { + bp->b_flags |= B_UNMAPPED; + bp->b_data = unmapped_buf; + } + bp->b_flags |= B_CLUSTER | (tbp->b_flags & (B_VMIO | + B_NEEDCOMMIT)); bp->b_iodone = cluster_callback; pbgetvp(vp, bp); /* @@ -962,8 +968,10 @@ cluster_wbuild(vp, size, start_lbn, len) tbp, b_cluster.cluster_entry); } finishcluster: - pmap_qenter(trunc_page((vm_offset_t) bp->b_data), - (vm_page_t *) bp->b_pages, bp->b_npages); + if ((bp->b_flags & B_UNMAPPED) == 0) { + pmap_qenter(trunc_page((vm_offset_t) bp->b_data), + (vm_page_t *)bp->b_pages, bp->b_npages); + } if (bp->b_bufsize > bp->b_kvasize) panic( "cluster_wbuild: b_bufsize(%ld) > b_kvasize(%d)\n", @@ -984,9 +992,7 @@ cluster_wbuild(vp, size, start_lbn, len) * Plus add one additional buffer. */ static struct cluster_save * -cluster_collectbufs(vp, last_bp) - struct vnode *vp; - struct buf *last_bp; +cluster_collectbufs(struct vnode *vp, struct buf *last_bp, int gbflags) { struct cluster_save *buflist; struct buf *bp; @@ -999,7 +1005,8 @@ cluster_collectbufs(vp, last_bp) buflist->bs_nchildren = 0; buflist->bs_children = (struct buf **) (buflist + 1); for (lbn = vp->v_cstart, i = 0; i < len; lbn++, i++) { - (void) bread(vp, lbn, last_bp->b_bcount, NOCRED, &bp); + (void)bread_gb(vp, lbn, last_bp->b_bcount, NOCRED, + gbflags, &bp); buflist->bs_children[i] = bp; if (bp->b_blkno == bp->b_lblkno) VOP_BMAP(vp, bp->b_lblkno, NULL, &bp->b_blkno, diff --git a/sys/kern/vfs_vnops.c b/sys/kern/vfs_vnops.c index 32c0978..6b28580 100644 --- a/sys/kern/vfs_vnops.c +++ b/sys/kern/vfs_vnops.c @@ -1121,6 +1121,45 @@ vn_io_fault_uiomove(char *data, int xfersize, struct uio *uio) return (error); } +int +vn_io_fault_pgmove(vm_page_t ma[], vm_offset_t offset, int xfersize, + struct uio *uio) +{ + struct thread *td; + vm_offset_t iov_base; + int cnt, pgadv; + + td = curthread; + if ((td->td_pflags & TDP_UIOHELD) == 0 || + uio->uio_segflg != UIO_USERSPACE) + return (uiomove_fromphys(ma, offset, xfersize, uio)); + + KASSERT(uio->uio_iovcnt == 1, ("uio_iovcnt %d", uio->uio_iovcnt)); + cnt = xfersize > uio->uio_resid ? uio->uio_resid : xfersize; + iov_base = (vm_offset_t)uio->uio_iov->iov_base; + switch (uio->uio_rw) { + case UIO_WRITE: + pmap_copy_pages(td->td_ma, iov_base & PAGE_MASK, ma, + offset, cnt); + break; + case UIO_READ: + pmap_copy_pages(ma, offset, td->td_ma, iov_base & PAGE_MASK, + cnt); + break; + } + pgadv = ((iov_base + cnt) >> PAGE_SHIFT) - (iov_base >> PAGE_SHIFT); + td->td_ma += pgadv; + KASSERT(td->td_ma_cnt >= pgadv, ("consumed pages %d %d", td->td_ma_cnt, + pgadv)); + td->td_ma_cnt -= pgadv; + uio->uio_iov->iov_base = (char *)(iov_base + cnt); + uio->uio_iov->iov_len -= cnt; + uio->uio_resid -= cnt; + uio->uio_offset += cnt; + return (0); +} + + /* * File table truncate routine. 
*/ diff --git a/sys/mips/mips/pmap.c b/sys/mips/mips/pmap.c index 7925b8c..4fdc88d 100644 --- a/sys/mips/mips/pmap.c +++ b/sys/mips/mips/pmap.c @@ -2576,6 +2576,51 @@ pmap_copy_page(vm_page_t src, vm_page_t dst) } } +void +pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset, vm_page_t mb[], + vm_offset_t b_offset, int xfersize) +{ + char *a_cp, *b_cp; + vm_page_t a_m, b_m; + vm_offset_t a_pg_offset, b_pg_offset; + vm_paddr_t a_phys, b_phys; + int cnt; + + while (xfersize > 0) { + a_pg_offset = a_offset & PAGE_MASK; + cnt = min(xfersize, PAGE_SIZE - a_pg_offset); + a_m = ma[a_offset >> PAGE_SHIFT]; + a_phys = VM_PAGE_TO_PHYS(a_m); + b_pg_offset = b_offset & PAGE_MASK; + cnt = min(cnt, PAGE_SIZE - b_pg_offset); + b_m = mb[b_offset >> PAGE_SHIFT]; + b_phys = VM_PAGE_TO_PHYS(b_m); + if (MIPS_DIRECT_MAPPABLE(a_phys) && + MIPS_DIRECT_MAPPABLE(b_phys)) { + pmap_flush_pvcache(a_m); + mips_dcache_wbinv_range_index( + MIPS_PHYS_TO_DIRECT(b_phys), PAGE_SIZE); + a_cp = (char *)MIPS_PHYS_TO_DIRECT(a_phys) + + a_pg_offset; + b_cp = (char *)MIPS_PHYS_TO_DIRECT(b_phys) + + b_pg_offset; + bcopy(a_cp, b_cp, cnt); + mips_dcache_wbinv_range((vm_offset_t)b_cp, cnt); + } else { + a_cp = (char *)pmap_lmem_map2(a_phys, b_phys); + b_cp = (char *)a_cp + PAGE_SIZE; + a_cp += a_pg_offset; + b_cp += b_pg_offset; + bcopy(a_cp, b_cp, cnt); + mips_dcache_wbinv_range((vm_offset_t)b_cp, cnt); + pmap_lmem_unmap(); + } + a_offset += cnt; + b_offset += cnt; + xfersize -= cnt; + } +} + /* * Returns true if the pmap's pv is one of the first * 16 pvs linked to from this page. This count may diff --git a/sys/powerpc/aim/mmu_oea.c b/sys/powerpc/aim/mmu_oea.c index b173760..9b496ac 100644 --- a/sys/powerpc/aim/mmu_oea.c +++ b/sys/powerpc/aim/mmu_oea.c @@ -276,6 +276,8 @@ void moea_change_wiring(mmu_t, pmap_t, vm_offset_t, boolean_t); void moea_clear_modify(mmu_t, vm_page_t); void moea_clear_reference(mmu_t, vm_page_t); void moea_copy_page(mmu_t, vm_page_t, vm_page_t); +void moea_copy_pages(mmu_t mmu, vm_page_t *ma, vm_offset_t a_offset, + vm_page_t *mb, vm_offset_t b_offset, int xfersize); void moea_enter(mmu_t, pmap_t, vm_offset_t, vm_page_t, vm_prot_t, boolean_t); void moea_enter_object(mmu_t, pmap_t, vm_offset_t, vm_offset_t, vm_page_t, vm_prot_t); @@ -321,6 +323,7 @@ static mmu_method_t moea_methods[] = { MMUMETHOD(mmu_clear_modify, moea_clear_modify), MMUMETHOD(mmu_clear_reference, moea_clear_reference), MMUMETHOD(mmu_copy_page, moea_copy_page), + MMUMETHOD(mmu_copy_pages, moea_copy_pages), MMUMETHOD(mmu_enter, moea_enter), MMUMETHOD(mmu_enter_object, moea_enter_object), MMUMETHOD(mmu_enter_quick, moea_enter_quick), @@ -1044,6 +1047,30 @@ moea_copy_page(mmu_t mmu, vm_page_t msrc, vm_page_t mdst) bcopy((void *)src, (void *)dst, PAGE_SIZE); } +void +moea_copy_pages(mmu_t mmu, vm_page_t *ma, vm_offset_t a_offset, + vm_page_t *mb, vm_offset_t b_offset, int xfersize) +{ + void *a_cp, *b_cp; + vm_offset_t a_pg_offset, b_pg_offset; + int cnt; + + while (xfersize > 0) { + a_pg_offset = a_offset & PAGE_MASK; + cnt = min(xfersize, PAGE_SIZE - a_pg_offset); + a_cp = (char *)VM_PAGE_TO_PHYS(ma[a_offset >> PAGE_SHIFT]) + + a_pg_offset; + b_pg_offset = b_offset & PAGE_MASK; + cnt = min(cnt, PAGE_SIZE - b_pg_offset); + b_cp = (char *)VM_PAGE_TO_PHYS(mb[b_offset >> PAGE_SHIFT]) + + b_pg_offset; + bcopy(a_cp, b_cp, cnt); + a_offset += cnt; + b_offset += cnt; + xfersize -= cnt; + } +} + /* * Zero a page of physical memory by temporarily mapping it into the tlb. 
*/ diff --git a/sys/powerpc/aim/mmu_oea64.c b/sys/powerpc/aim/mmu_oea64.c index 00dab9b..a7bacf4 100644 --- a/sys/powerpc/aim/mmu_oea64.c +++ b/sys/powerpc/aim/mmu_oea64.c @@ -291,6 +291,8 @@ void moea64_change_wiring(mmu_t, pmap_t, vm_offset_t, boolean_t); void moea64_clear_modify(mmu_t, vm_page_t); void moea64_clear_reference(mmu_t, vm_page_t); void moea64_copy_page(mmu_t, vm_page_t, vm_page_t); +void moea64_copy_pages(mmu_t mmu, vm_page_t *ma, vm_offset_t a_offset, + vm_page_t *mb, vm_offset_t b_offset, int xfersize); void moea64_enter(mmu_t, pmap_t, vm_offset_t, vm_page_t, vm_prot_t, boolean_t); void moea64_enter_object(mmu_t, pmap_t, vm_offset_t, vm_offset_t, vm_page_t, vm_prot_t); @@ -335,6 +337,7 @@ static mmu_method_t moea64_methods[] = { MMUMETHOD(mmu_clear_modify, moea64_clear_modify), MMUMETHOD(mmu_clear_reference, moea64_clear_reference), MMUMETHOD(mmu_copy_page, moea64_copy_page), + MMUMETHOD(mmu_copy_pages, moea64_copy_pages), MMUMETHOD(mmu_enter, moea64_enter), MMUMETHOD(mmu_enter_object, moea64_enter_object), MMUMETHOD(mmu_enter_quick, moea64_enter_quick), @@ -1105,6 +1108,72 @@ moea64_copy_page(mmu_t mmu, vm_page_t msrc, vm_page_t mdst) } } +static inline void +moea64_copy_pages_dmap(mmu_t mmu, vm_page_t *ma, vm_offset_t a_offset, + vm_page_t *mb, vm_offset_t b_offset, int xfersize) +{ + void *a_cp, *b_cp; + vm_offset_t a_pg_offset, b_pg_offset; + int cnt; + + while (xfersize > 0) { + a_pg_offset = a_offset & PAGE_MASK; + cnt = min(xfersize, PAGE_SIZE - a_pg_offset); + a_cp = (char *)VM_PAGE_TO_PHYS(ma[a_offset >> PAGE_SHIFT]) + + a_pg_offset; + b_pg_offset = b_offset & PAGE_MASK; + cnt = min(cnt, PAGE_SIZE - b_pg_offset); + b_cp = (char *)VM_PAGE_TO_PHYS(mb[b_offset >> PAGE_SHIFT]) + + b_pg_offset; + bcopy(a_cp, b_cp, cnt); + a_offset += cnt; + b_offset += cnt; + xfersize -= cnt; + } +} + +static inline void +moea64_copy_pages_nodmap(mmu_t mmu, vm_page_t *ma, vm_offset_t a_offset, + vm_page_t *mb, vm_offset_t b_offset, int xfersize) +{ + void *a_cp, *b_cp; + vm_offset_t a_pg_offset, b_pg_offset; + int cnt; + + mtx_lock(&moea64_scratchpage_mtx); + while (xfersize > 0) { + a_pg_offset = a_offset & PAGE_MASK; + cnt = min(xfersize, PAGE_SIZE - a_pg_offset); + moea64_set_scratchpage_pa(mmu, 0, + VM_PAGE_TO_PHYS(ma[a_offset >> PAGE_SHIFT])); + a_cp = (char *)moea64_scratchpage_va[0] + a_pg_offset; + b_pg_offset = b_offset & PAGE_MASK; + cnt = min(cnt, PAGE_SIZE - b_pg_offset); + moea64_set_scratchpage_pa(mmu, 1, + VM_PAGE_TO_PHYS(mb[b_offset >> PAGE_SHIFT])); + b_cp = (char *)moea64_scratchpage_va[1] + b_pg_offset; + bcopy(a_cp, b_cp, cnt); + a_offset += cnt; + b_offset += cnt; + xfersize -= cnt; + } + mtx_unlock(&moea64_scratchpage_mtx); +} + +void +moea64_copy_pages(mmu_t mmu, vm_page_t *ma, vm_offset_t a_offset, + vm_page_t *mb, vm_offset_t b_offset, int xfersize) +{ + + if (hw_direct_map) { + moea64_copy_pages_dmap(mmu, ma, a_offset, mb, b_offset, + xfersize); + } else { + moea64_copy_pages_nodmap(mmu, ma, a_offset, mb, b_offset, + xfersize); + } +} + void moea64_zero_page_area(mmu_t mmu, vm_page_t m, int off, int size) { diff --git a/sys/powerpc/booke/pmap.c b/sys/powerpc/booke/pmap.c index f6e5f9c..233e1e0 100644 --- a/sys/powerpc/booke/pmap.c +++ b/sys/powerpc/booke/pmap.c @@ -275,6 +275,8 @@ static void mmu_booke_clear_reference(mmu_t, vm_page_t); static void mmu_booke_copy(mmu_t, pmap_t, pmap_t, vm_offset_t, vm_size_t, vm_offset_t); static void mmu_booke_copy_page(mmu_t, vm_page_t, vm_page_t); +static void mmu_booke_copy_pages(mmu_t, vm_page_t *, + vm_offset_t, 
vm_page_t *, vm_offset_t, int); static void mmu_booke_enter(mmu_t, pmap_t, vm_offset_t, vm_page_t, vm_prot_t, boolean_t); static void mmu_booke_enter_object(mmu_t, pmap_t, vm_offset_t, vm_offset_t, @@ -335,6 +337,7 @@ static mmu_method_t mmu_booke_methods[] = { MMUMETHOD(mmu_clear_reference, mmu_booke_clear_reference), MMUMETHOD(mmu_copy, mmu_booke_copy), MMUMETHOD(mmu_copy_page, mmu_booke_copy_page), + MMUMETHOD(mmu_copy_pages, mmu_booke_copy_pages), MMUMETHOD(mmu_enter, mmu_booke_enter), MMUMETHOD(mmu_enter_object, mmu_booke_enter_object), MMUMETHOD(mmu_enter_quick, mmu_booke_enter_quick), @@ -2138,6 +2141,36 @@ mmu_booke_copy_page(mmu_t mmu, vm_page_t sm, vm_page_t dm) mtx_unlock(©_page_mutex); } +static inline void +mmu_booke_copy_pages(mmu_t mmu, vm_page_t *ma, vm_offset_t a_offset, + vm_page_t *mb, vm_offset_t b_offset, int xfersize) +{ + void *a_cp, *b_cp; + vm_offset_t a_pg_offset, b_pg_offset; + int cnt; + + mtx_lock(©_page_mutex); + while (xfersize > 0) { + a_pg_offset = a_offset & PAGE_MASK; + cnt = min(xfersize, PAGE_SIZE - a_pg_offset); + mmu_booke_kenter(mmu, copy_page_src_va, + VM_PAGE_TO_PHYS(ma[a_offset >> PAGE_SHIFT])); + a_cp = (char *)copy_page_src_va + a_pg_offset; + b_pg_offset = b_offset & PAGE_MASK; + cnt = min(cnt, PAGE_SIZE - b_pg_offset); + mmu_booke_kenter(mmu, copy_page_dst_va, + VM_PAGE_TO_PHYS(mb[b_offset >> PAGE_SHIFT])); + b_cp = (char *)copy_page_dst_va + b_pg_offset; + bcopy(a_cp, b_cp, cnt); + mmu_booke_kremove(mmu, copy_page_dst_va); + mmu_booke_kremove(mmu, copy_page_src_va); + a_offset += cnt; + b_offset += cnt; + xfersize -= cnt; + } + mtx_unlock(©_page_mutex); +} + /* * mmu_booke_zero_page_idle zeros the specified hardware page by mapping it * into virtual memory and using bzero to clear its contents. This is intended diff --git a/sys/powerpc/powerpc/mmu_if.m b/sys/powerpc/powerpc/mmu_if.m index 8cd6e52..0382bd8 100644 --- a/sys/powerpc/powerpc/mmu_if.m +++ b/sys/powerpc/powerpc/mmu_if.m @@ -215,6 +215,14 @@ METHOD void copy_page { vm_page_t _dst; }; +METHOD void copy_pages { + mmu_t _mmu; + vm_page_t *_ma; + vm_offset_t _a_offset; + vm_page_t *_mb; + vm_offset_t _b_offset; + int _xfersize; +}; /** * @brief Create a mapping between a virtual/physical address pair in the diff --git a/sys/powerpc/powerpc/pmap_dispatch.c b/sys/powerpc/powerpc/pmap_dispatch.c index c919196..42f1a39 100644 --- a/sys/powerpc/powerpc/pmap_dispatch.c +++ b/sys/powerpc/powerpc/pmap_dispatch.c @@ -133,6 +133,16 @@ pmap_copy_page(vm_page_t src, vm_page_t dst) } void +pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset, vm_page_t mb[], + vm_offset_t b_offset, int xfersize) +{ + + CTR6(KTR_PMAP, "%s(%p, %#x, %p, %#x, %#x)", __func__, ma, + a_offset, mb, b_offset, xfersize); + MMU_COPY_PAGES(mmu_obj, ma, a_offset, mb, b_offset, xfersize); +} + +void pmap_enter(pmap_t pmap, vm_offset_t va, vm_prot_t access, vm_page_t p, vm_prot_t prot, boolean_t wired) { diff --git a/sys/sparc64/sparc64/pmap.c b/sys/sparc64/sparc64/pmap.c index 08f008c..27947dd 100644 --- a/sys/sparc64/sparc64/pmap.c +++ b/sys/sparc64/sparc64/pmap.c @@ -1835,8 +1835,9 @@ pmap_zero_page_idle(vm_page_t m) } } -void -pmap_copy_page(vm_page_t msrc, vm_page_t mdst) +static void +pmap_copy_page_offs(vm_page_t msrc, int src_off, vm_page_t mdst, int dst_off, + int cnt) { vm_offset_t vdst; vm_offset_t vsrc; @@ -1857,16 +1858,17 @@ pmap_copy_page(vm_page_t msrc, vm_page_t mdst) PMAP_STATS_INC(pmap_ncopy_page_c); vdst = TLB_PHYS_TO_DIRECT(pdst); vsrc = TLB_PHYS_TO_DIRECT(psrc); - cpu_block_copy((void *)vsrc, (void *)vdst, 
PAGE_SIZE); + cpu_block_copy((char *)vsrc + src_off, (char *)vdst + dst_off, + cnt); } else if (msrc->md.color == -1 && mdst->md.color == -1) { PMAP_STATS_INC(pmap_ncopy_page_nc); - ascopy(ASI_PHYS_USE_EC, psrc, pdst, PAGE_SIZE); + ascopy(ASI_PHYS_USE_EC, psrc + src_off, pdst + dst_off, cnt); } else if (msrc->md.color == -1) { if (mdst->md.color == DCACHE_COLOR(pdst)) { PMAP_STATS_INC(pmap_ncopy_page_dc); vdst = TLB_PHYS_TO_DIRECT(pdst); - ascopyfrom(ASI_PHYS_USE_EC, psrc, (void *)vdst, - PAGE_SIZE); + ascopyfrom(ASI_PHYS_USE_EC, psrc + src_off, + (char *)vdst + dst_off, cnt); } else { PMAP_STATS_INC(pmap_ncopy_page_doc); PMAP_LOCK(kernel_pmap); @@ -1875,8 +1877,8 @@ pmap_copy_page(vm_page_t msrc, vm_page_t mdst) tp->tte_data = TD_V | TD_8K | TD_PA(pdst) | TD_CP | TD_CV | TD_W; tp->tte_vpn = TV_VPN(vdst, TS_8K); - ascopyfrom(ASI_PHYS_USE_EC, psrc, (void *)vdst, - PAGE_SIZE); + ascopyfrom(ASI_PHYS_USE_EC, psrc + src_off, + (char *)vdst + dst_off, cnt); tlb_page_demap(kernel_pmap, vdst); PMAP_UNLOCK(kernel_pmap); } @@ -1884,8 +1886,8 @@ pmap_copy_page(vm_page_t msrc, vm_page_t mdst) if (msrc->md.color == DCACHE_COLOR(psrc)) { PMAP_STATS_INC(pmap_ncopy_page_sc); vsrc = TLB_PHYS_TO_DIRECT(psrc); - ascopyto((void *)vsrc, ASI_PHYS_USE_EC, pdst, - PAGE_SIZE); + ascopyto((char *)vsrc + src_off, ASI_PHYS_USE_EC, + pdst + dst_off, cnt); } else { PMAP_STATS_INC(pmap_ncopy_page_soc); PMAP_LOCK(kernel_pmap); @@ -1894,8 +1896,8 @@ pmap_copy_page(vm_page_t msrc, vm_page_t mdst) tp->tte_data = TD_V | TD_8K | TD_PA(psrc) | TD_CP | TD_CV | TD_W; tp->tte_vpn = TV_VPN(vsrc, TS_8K); - ascopyto((void *)vsrc, ASI_PHYS_USE_EC, pdst, - PAGE_SIZE); + ascopyto((char *)vsrc + src_off, ASI_PHYS_USE_EC, + pdst + dst_off, cnt); tlb_page_demap(kernel_pmap, vsrc); PMAP_UNLOCK(kernel_pmap); } @@ -1912,13 +1914,41 @@ pmap_copy_page(vm_page_t msrc, vm_page_t mdst) tp->tte_data = TD_V | TD_8K | TD_PA(psrc) | TD_CP | TD_CV | TD_W; tp->tte_vpn = TV_VPN(vsrc, TS_8K); - cpu_block_copy((void *)vsrc, (void *)vdst, PAGE_SIZE); + cpu_block_copy((char *)vsrc + src_off, (char *)vdst + dst_off, + cnt); tlb_page_demap(kernel_pmap, vdst); tlb_page_demap(kernel_pmap, vsrc); PMAP_UNLOCK(kernel_pmap); } } +void +pmap_copy_page(vm_page_t msrc, vm_page_t mdst) +{ + + pmap_copy_page_offs(msrc, 0, mdst, 0, PAGE_SIZE); +} + +void +pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset, vm_page_t mb[], + vm_offset_t b_offset, int xfersize) +{ + vm_offset_t a_pg_offset, b_pg_offset; + int cnt; + + while (xfersize > 0) { + a_pg_offset = a_offset & PAGE_MASK; + cnt = min(xfersize, PAGE_SIZE - a_pg_offset); + b_pg_offset = b_offset & PAGE_MASK; + cnt = min(cnt, PAGE_SIZE - b_pg_offset); + pmap_copy_page_offs(ma[a_offset >> PAGE_SHIFT], a_pg_offset, + mb[b_offset >> PAGE_SHIFT], b_pg_offset, cnt); + a_offset += cnt; + b_offset += cnt; + xfersize -= cnt; + } +} + /* * Returns true if the pmap's pv is one of the first * 16 pvs linked to from this page. This count may diff --git a/sys/sys/bio.h b/sys/sys/bio.h index c016ee6..7678f5a 100644 --- a/sys/sys/bio.h +++ b/sys/sys/bio.h @@ -55,10 +55,13 @@ #define BIO_DONE 0x02 #define BIO_ONQUEUE 0x04 #define BIO_ORDERED 0x08 +#define BIO_UNMAPPED 0x10 +#define BIO_TRANSIENT_MAPPING 0x20 #ifdef _KERNEL struct disk; struct bio; +struct vm_map; /* Empty classifier tag, to prevent further classification. */ #define BIO_NOTCLASSIFIED (void *)(~0UL) @@ -78,6 +81,9 @@ struct bio { off_t bio_offset; /* Offset into file. */ long bio_bcount; /* Valid bytes in buffer. 
*/ caddr_t bio_data; /* Memory, superblocks, indirect etc. */ + struct vm_page **bio_ma; /* Or unmapped. */ + int bio_ma_offset; /* Offset in the first page of bio_ma. */ + int bio_ma_n; /* Number of pages in bio_ma. */ int bio_error; /* Errno for BIO_ERROR. */ long bio_resid; /* Remaining I/O in bytes. */ void (*bio_done)(struct bio *); @@ -121,6 +127,9 @@ struct bio_queue_head { struct bio *insert_point; }; +extern struct vm_map *bio_transient_map; +extern int bio_transient_maxcnt; + void biodone(struct bio *bp); void biofinish(struct bio *bp, struct devstat *stat, int error); int biowait(struct bio *bp, const char *wchan); diff --git a/sys/sys/buf.h b/sys/sys/buf.h index 672ef5a..0c7a6f4 100644 --- a/sys/sys/buf.h +++ b/sys/sys/buf.h @@ -117,6 +117,7 @@ struct buf { long b_bufsize; /* Allocated buffer size. */ long b_runningbufspace; /* when I/O is running, pipelining */ caddr_t b_kvabase; /* base kva for buffer */ + caddr_t b_kvaalloc; /* allocated kva for B_KVAALLOC */ int b_kvasize; /* size of kva for buffer */ daddr_t b_lblkno; /* Logical block number. */ struct vnode *b_vp; /* Device vnode. */ @@ -202,8 +203,8 @@ struct buf { #define B_PERSISTENT 0x00000100 /* Perm. ref'ed while EXT2FS mounted. */ #define B_DONE 0x00000200 /* I/O completed. */ #define B_EINTR 0x00000400 /* I/O was interrupted */ -#define B_00000800 0x00000800 /* Available flag. */ -#define B_00001000 0x00001000 /* Available flag. */ +#define B_UNMAPPED 0x00000800 /* KVA is not mapped. */ +#define B_KVAALLOC 0x00001000 /* But allocated. */ #define B_INVAL 0x00002000 /* Does not contain valid info. */ #define B_BARRIER 0x00004000 /* Write this and all preceeding first. */ #define B_NOCACHE 0x00008000 /* Do not cache block after use. */ @@ -453,7 +454,9 @@ buf_countdeps(struct buf *bp, int i) */ #define GB_LOCK_NOWAIT 0x0001 /* Fail if we block on a buf lock. */ #define GB_NOCREAT 0x0002 /* Don't create a buf if not found. */ -#define GB_NOWAIT_BD 0x0004 /* Do not wait for bufdaemon */ +#define GB_NOWAIT_BD 0x0004 /* Do not wait for bufdaemon. */ +#define GB_UNMAPPED 0x0008 /* Do not mmap buffer pages. */ +#define GB_KVAALLOC 0x0010 /* But allocate KVA. */ #ifdef _KERNEL extern int nbuf; /* The number of buffer headers */ @@ -470,17 +473,22 @@ extern struct buf *swbuf; /* Swap I/O buffer headers. */ extern int nswbuf; /* Number of swap I/O buffer headers. */ extern int cluster_pbuf_freecnt; /* Number of pbufs for clusters */ extern int vnode_pbuf_freecnt; /* Number of pbufs for vnode pager */ +extern caddr_t unmapped_buf; void runningbufwakeup(struct buf *); void waitrunningbufspace(void); caddr_t kern_vfs_bio_buffer_alloc(caddr_t v, long physmem_est); void bufinit(void); +void bdata2bio(struct buf *bp, struct bio *bip); void bwillwrite(void); int buf_dirty_count_severe(void); void bremfree(struct buf *); void bremfreef(struct buf *); /* XXX Force bremfree, only for nfs. 
*/ #define bread(vp, blkno, size, cred, bpp) \ - breadn_flags(vp, blkno, size, 0, 0, 0, cred, 0, bpp) + breadn_flags(vp, blkno, size, NULL, NULL, 0, cred, 0, bpp) +#define bread_gb(vp, blkno, size, cred, gbflags, bpp) \ + breadn_flags(vp, blkno, size, NULL, NULL, 0, cred, \ + gbflags, bpp) #define breadn(vp, blkno, size, rablkno, rabsize, cnt, cred, bpp) \ breadn_flags(vp, blkno, size, rablkno, rabsize, cnt, cred, 0, bpp) int breadn_flags(struct vnode *, daddr_t, int, daddr_t *, int *, int, @@ -508,14 +516,15 @@ void bufdone_finish(struct buf *); void bd_speedup(void); int cluster_read(struct vnode *, u_quad_t, daddr_t, long, - struct ucred *, long, int, struct buf **); -int cluster_wbuild(struct vnode *, long, daddr_t, int); -void cluster_write(struct vnode *, struct buf *, u_quad_t, int); + struct ucred *, long, int, int, struct buf **); +int cluster_wbuild(struct vnode *, long, daddr_t, int, int); +void cluster_write(struct vnode *, struct buf *, u_quad_t, int, int); +void vfs_bio_bzero_buf(struct buf *bp, int base, int size); void vfs_bio_set_valid(struct buf *, int base, int size); void vfs_bio_clrbuf(struct buf *); void vfs_busy_pages(struct buf *, int clear_modify); void vfs_unbusy_pages(struct buf *); -int vmapbuf(struct buf *); +int vmapbuf(struct buf *, int); void vunmapbuf(struct buf *); void relpbuf(struct buf *, int *); void brelvp(struct buf *); diff --git a/sys/sys/mount.h b/sys/sys/mount.h index bbbc569..f8e7662 100644 --- a/sys/sys/mount.h +++ b/sys/sys/mount.h @@ -351,6 +351,7 @@ void __mnt_vnode_markerfree_active(struct vnode **mvp, struct mount *); #define MNTK_VGONE_WAITER 0x00000400 #define MNTK_LOOKUP_EXCL_DOTDOT 0x00000800 #define MNTK_MARKER 0x00001000 +#define MNTK_UNMAPPED_BUFS 0x00002000 #define MNTK_NOASYNC 0x00800000 /* disable async */ #define MNTK_UNMOUNT 0x01000000 /* unmount in progress */ #define MNTK_MWAIT 0x02000000 /* waiting for unmount to finish */ diff --git a/sys/sys/vnode.h b/sys/sys/vnode.h index b54dc04..e6a41a4 100644 --- a/sys/sys/vnode.h +++ b/sys/sys/vnode.h @@ -692,6 +692,8 @@ int vn_vget_ino(struct vnode *vp, ino_t ino, int lkflags, struct vnode **rvp); int vn_io_fault_uiomove(char *data, int xfersize, struct uio *uio); +int vn_io_fault_pgmove(vm_page_t ma[], vm_offset_t offset, int xfersize, + struct uio *uio); #define vn_rangelock_unlock(vp, cookie) \ rangelock_unlock(&(vp)->v_rl, (cookie), VI_MTX(vp)) diff --git a/sys/ufs/ffs/ffs_alloc.c b/sys/ufs/ffs/ffs_alloc.c index abe4073..0bdbbae 100644 --- a/sys/ufs/ffs/ffs_alloc.c +++ b/sys/ufs/ffs/ffs_alloc.c @@ -254,7 +254,7 @@ ffs_realloccg(ip, lbprev, bprev, bpref, osize, nsize, flags, cred, bpp) struct buf *bp; struct ufsmount *ump; u_int cg, request, reclaimed; - int error; + int error, gbflags; ufs2_daddr_t bno; static struct timeval lastfail; static int curfail; @@ -265,6 +265,8 @@ ffs_realloccg(ip, lbprev, bprev, bpref, osize, nsize, flags, cred, bpp) fs = ip->i_fs; bp = NULL; ump = ip->i_ump; + gbflags = (flags & BA_UNMAPPED) != 0 ? GB_UNMAPPED : 0; + mtx_assert(UFS_MTX(ump), MA_OWNED); #ifdef INVARIANTS if (vp->v_mount->mnt_kern_flag & MNTK_SUSPENDED) @@ -296,7 +298,7 @@ retry: /* * Allocate the extra space in the buffer. 
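 * The read of the existing fragment goes through bread_gb() so that a
 * BA_UNMAPPED request (gbflags set to GB_UNMAPPED above) does not
 * force a KVA mapping just to grow the buffer.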
*/ - error = bread(vp, lbprev, osize, NOCRED, &bp); + error = bread_gb(vp, lbprev, osize, NOCRED, gbflags, &bp); if (error) { brelse(bp); return (error); @@ -332,7 +334,7 @@ retry: ip->i_flag |= IN_CHANGE | IN_UPDATE; allocbuf(bp, nsize); bp->b_flags |= B_DONE; - bzero(bp->b_data + osize, nsize - osize); + vfs_bio_bzero_buf(bp, osize, nsize - osize); if ((bp->b_flags & (B_MALLOC | B_VMIO)) == B_VMIO) vfs_bio_set_valid(bp, osize, nsize - osize); *bpp = bp; @@ -400,7 +402,7 @@ retry: ip->i_flag |= IN_CHANGE | IN_UPDATE; allocbuf(bp, nsize); bp->b_flags |= B_DONE; - bzero(bp->b_data + osize, nsize - osize); + vfs_bio_bzero_buf(bp, osize, nsize - osize); if ((bp->b_flags & (B_MALLOC | B_VMIO)) == B_VMIO) vfs_bio_set_valid(bp, osize, nsize - osize); *bpp = bp; diff --git a/sys/ufs/ffs/ffs_balloc.c b/sys/ufs/ffs/ffs_balloc.c index 0e29be87f..d20df77 100644 --- a/sys/ufs/ffs/ffs_balloc.c +++ b/sys/ufs/ffs/ffs_balloc.c @@ -107,7 +107,7 @@ ffs_balloc_ufs1(struct vnode *vp, off_t startoffset, int size, int saved_inbdflush; static struct timeval lastfail; static int curfail; - int reclaimed; + int gbflags, reclaimed; ip = VTOI(vp); dp = ip->i_din1; @@ -123,6 +123,7 @@ ffs_balloc_ufs1(struct vnode *vp, off_t startoffset, int size, return (EOPNOTSUPP); if (lbn < 0) return (EFBIG); + gbflags = (flags & BA_UNMAPPED) != 0 ? GB_UNMAPPED : 0; if (DOINGSOFTDEP(vp)) softdep_prealloc(vp, MNT_WAIT); @@ -211,7 +212,7 @@ ffs_balloc_ufs1(struct vnode *vp, off_t startoffset, int size, nsize, flags, cred, &newb); if (error) return (error); - bp = getblk(vp, lbn, nsize, 0, 0, 0); + bp = getblk(vp, lbn, nsize, 0, 0, gbflags); bp->b_blkno = fsbtodb(fs, newb); if (flags & BA_CLRBUF) vfs_bio_clrbuf(bp); @@ -255,7 +256,7 @@ ffs_balloc_ufs1(struct vnode *vp, off_t startoffset, int size, nb = newb; *allocblk++ = nb; *lbns_remfree++ = indirs[1].in_lbn; - bp = getblk(vp, indirs[1].in_lbn, fs->fs_bsize, 0, 0, 0); + bp = getblk(vp, indirs[1].in_lbn, fs->fs_bsize, 0, 0, gbflags); bp->b_blkno = fsbtodb(fs, nb); vfs_bio_clrbuf(bp); if (DOINGSOFTDEP(vp)) { @@ -389,7 +390,7 @@ retry: nb = newb; *allocblk++ = nb; *lbns_remfree++ = lbn; - nbp = getblk(vp, lbn, fs->fs_bsize, 0, 0, 0); + nbp = getblk(vp, lbn, fs->fs_bsize, 0, 0, gbflags); nbp->b_blkno = fsbtodb(fs, nb); if (flags & BA_CLRBUF) vfs_bio_clrbuf(nbp); @@ -418,16 +419,17 @@ retry: if (seqcount && (vp->v_mount->mnt_flag & MNT_NOCLUSTERR) == 0) { error = cluster_read(vp, ip->i_size, lbn, (int)fs->fs_bsize, NOCRED, - MAXBSIZE, seqcount, &nbp); + MAXBSIZE, seqcount, gbflags, &nbp); } else { - error = bread(vp, lbn, (int)fs->fs_bsize, NOCRED, &nbp); + error = bread_gb(vp, lbn, (int)fs->fs_bsize, NOCRED, + gbflags, &nbp); } if (error) { brelse(nbp); goto fail; } } else { - nbp = getblk(vp, lbn, fs->fs_bsize, 0, 0, 0); + nbp = getblk(vp, lbn, fs->fs_bsize, 0, 0, gbflags); nbp->b_blkno = fsbtodb(fs, nb); } curthread_pflags_restore(saved_inbdflush); @@ -539,7 +541,7 @@ ffs_balloc_ufs2(struct vnode *vp, off_t startoffset, int size, int saved_inbdflush; static struct timeval lastfail; static int curfail; - int reclaimed; + int gbflags, reclaimed; ip = VTOI(vp); dp = ip->i_din2; @@ -553,6 +555,7 @@ ffs_balloc_ufs2(struct vnode *vp, off_t startoffset, int size, *bpp = NULL; if (lbn < 0) return (EFBIG); + gbflags = (flags & BA_UNMAPPED) != 0 ? 
GB_UNMAPPED : 0; if (DOINGSOFTDEP(vp)) softdep_prealloc(vp, MNT_WAIT); @@ -603,7 +606,8 @@ ffs_balloc_ufs2(struct vnode *vp, off_t startoffset, int size, panic("ffs_balloc_ufs2: BA_METAONLY for ext block"); nb = dp->di_extb[lbn]; if (nb != 0 && dp->di_extsize >= smalllblktosize(fs, lbn + 1)) { - error = bread(vp, -1 - lbn, fs->fs_bsize, NOCRED, &bp); + error = bread_gb(vp, -1 - lbn, fs->fs_bsize, NOCRED, + gbflags, &bp); if (error) { brelse(bp); return (error); @@ -620,7 +624,8 @@ ffs_balloc_ufs2(struct vnode *vp, off_t startoffset, int size, osize = fragroundup(fs, blkoff(fs, dp->di_extsize)); nsize = fragroundup(fs, size); if (nsize <= osize) { - error = bread(vp, -1 - lbn, osize, NOCRED, &bp); + error = bread_gb(vp, -1 - lbn, osize, NOCRED, + gbflags, &bp); if (error) { brelse(bp); return (error); @@ -653,7 +658,7 @@ ffs_balloc_ufs2(struct vnode *vp, off_t startoffset, int size, nsize, flags, cred, &newb); if (error) return (error); - bp = getblk(vp, -1 - lbn, nsize, 0, 0, 0); + bp = getblk(vp, -1 - lbn, nsize, 0, 0, gbflags); bp->b_blkno = fsbtodb(fs, newb); bp->b_xflags |= BX_ALTDATA; if (flags & BA_CLRBUF) @@ -679,9 +684,9 @@ ffs_balloc_ufs2(struct vnode *vp, off_t startoffset, int size, if (osize < fs->fs_bsize && osize > 0) { UFS_LOCK(ump); error = ffs_realloccg(ip, nb, dp->di_db[nb], - ffs_blkpref_ufs2(ip, lastlbn, (int)nb, - &dp->di_db[0]), osize, (int)fs->fs_bsize, - flags, cred, &bp); + ffs_blkpref_ufs2(ip, lastlbn, (int)nb, + &dp->di_db[0]), osize, (int)fs->fs_bsize, + flags, cred, &bp); if (error) return (error); if (DOINGSOFTDEP(vp)) @@ -707,7 +712,8 @@ ffs_balloc_ufs2(struct vnode *vp, off_t startoffset, int size, panic("ffs_balloc_ufs2: BA_METAONLY for direct block"); nb = dp->di_db[lbn]; if (nb != 0 && ip->i_size >= smalllblktosize(fs, lbn + 1)) { - error = bread(vp, lbn, fs->fs_bsize, NOCRED, &bp); + error = bread_gb(vp, lbn, fs->fs_bsize, NOCRED, + gbflags, &bp); if (error) { brelse(bp); return (error); @@ -723,7 +729,8 @@ ffs_balloc_ufs2(struct vnode *vp, off_t startoffset, int size, osize = fragroundup(fs, blkoff(fs, ip->i_size)); nsize = fragroundup(fs, size); if (nsize <= osize) { - error = bread(vp, lbn, osize, NOCRED, &bp); + error = bread_gb(vp, lbn, osize, NOCRED, + gbflags, &bp); if (error) { brelse(bp); return (error); @@ -733,7 +740,7 @@ ffs_balloc_ufs2(struct vnode *vp, off_t startoffset, int size, UFS_LOCK(ump); error = ffs_realloccg(ip, lbn, dp->di_db[lbn], ffs_blkpref_ufs2(ip, lbn, (int)lbn, - &dp->di_db[0]), osize, nsize, flags, + &dp->di_db[0]), osize, nsize, flags, cred, &bp); if (error) return (error); @@ -753,7 +760,7 @@ ffs_balloc_ufs2(struct vnode *vp, off_t startoffset, int size, &dp->di_db[0]), nsize, flags, cred, &newb); if (error) return (error); - bp = getblk(vp, lbn, nsize, 0, 0, 0); + bp = getblk(vp, lbn, nsize, 0, 0, gbflags); bp->b_blkno = fsbtodb(fs, newb); if (flags & BA_CLRBUF) vfs_bio_clrbuf(bp); @@ -797,7 +804,8 @@ ffs_balloc_ufs2(struct vnode *vp, off_t startoffset, int size, nb = newb; *allocblk++ = nb; *lbns_remfree++ = indirs[1].in_lbn; - bp = getblk(vp, indirs[1].in_lbn, fs->fs_bsize, 0, 0, 0); + bp = getblk(vp, indirs[1].in_lbn, fs->fs_bsize, 0, 0, + GB_UNMAPPED); bp->b_blkno = fsbtodb(fs, nb); vfs_bio_clrbuf(bp); if (DOINGSOFTDEP(vp)) { @@ -862,7 +870,8 @@ retry: nb = newb; *allocblk++ = nb; *lbns_remfree++ = indirs[i].in_lbn; - nbp = getblk(vp, indirs[i].in_lbn, fs->fs_bsize, 0, 0, 0); + nbp = getblk(vp, indirs[i].in_lbn, fs->fs_bsize, 0, 0, + GB_UNMAPPED); nbp->b_blkno = fsbtodb(fs, nb); vfs_bio_clrbuf(nbp); if 
(DOINGSOFTDEP(vp)) { @@ -931,7 +940,7 @@ retry: nb = newb; *allocblk++ = nb; *lbns_remfree++ = lbn; - nbp = getblk(vp, lbn, fs->fs_bsize, 0, 0, 0); + nbp = getblk(vp, lbn, fs->fs_bsize, 0, 0, gbflags); nbp->b_blkno = fsbtodb(fs, nb); if (flags & BA_CLRBUF) vfs_bio_clrbuf(nbp); @@ -966,16 +975,17 @@ retry: if (seqcount && (vp->v_mount->mnt_flag & MNT_NOCLUSTERR) == 0) { error = cluster_read(vp, ip->i_size, lbn, (int)fs->fs_bsize, NOCRED, - MAXBSIZE, seqcount, &nbp); + MAXBSIZE, seqcount, gbflags, &nbp); } else { - error = bread(vp, lbn, (int)fs->fs_bsize, NOCRED, &nbp); + error = bread_gb(vp, lbn, (int)fs->fs_bsize, + NOCRED, gbflags, &nbp); } if (error) { brelse(nbp); goto fail; } } else { - nbp = getblk(vp, lbn, fs->fs_bsize, 0, 0, 0); + nbp = getblk(vp, lbn, fs->fs_bsize, 0, 0, gbflags); nbp->b_blkno = fsbtodb(fs, nb); } curthread_pflags_restore(saved_inbdflush); diff --git a/sys/ufs/ffs/ffs_rawread.c b/sys/ufs/ffs/ffs_rawread.c index f8e3e00..45cb730 100644 --- a/sys/ufs/ffs/ffs_rawread.c +++ b/sys/ufs/ffs/ffs_rawread.c @@ -240,7 +240,7 @@ ffs_rawread_readahead(struct vnode *vp, bp->b_bcount = bsize - blockoff * DEV_BSIZE; bp->b_bufsize = bp->b_bcount; - if (vmapbuf(bp) < 0) + if (vmapbuf(bp, 1) < 0) return EFAULT; maybe_yield(); @@ -259,7 +259,7 @@ ffs_rawread_readahead(struct vnode *vp, bp->b_bcount = bsize * (1 + bforwards) - blockoff * DEV_BSIZE; bp->b_bufsize = bp->b_bcount; - if (vmapbuf(bp) < 0) + if (vmapbuf(bp, 1) < 0) return EFAULT; BO_STRATEGY(&dp->v_bufobj, bp); diff --git a/sys/ufs/ffs/ffs_vfsops.c b/sys/ufs/ffs/ffs_vfsops.c index 0204613..f1a3aab 100644 --- a/sys/ufs/ffs/ffs_vfsops.c +++ b/sys/ufs/ffs/ffs_vfsops.c @@ -1076,7 +1076,7 @@ ffs_mountfs(devvp, mp, td) */ MNT_ILOCK(mp); mp->mnt_kern_flag |= MNTK_LOOKUP_SHARED | MNTK_EXTENDED_SHARED | - MNTK_NO_IOPF; + MNTK_NO_IOPF | MNTK_UNMAPPED_BUFS; MNT_IUNLOCK(mp); #ifdef UFS_EXTATTR #ifdef UFS_EXTATTR_AUTOSTART @@ -2091,6 +2091,7 @@ ffs_bufwrite(struct buf *bp) * set b_lblkno and BKGRDMARKER before calling bgetvp() * to avoid confusing the splay tree and gbincore(). */ + KASSERT((bp->b_flags & B_UNMAPPED) == 0, ("Unmapped cg")); memcpy(newbp->b_data, bp->b_data, bp->b_bufsize); newbp->b_lblkno = bp->b_lblkno; newbp->b_xflags |= BX_BKGRDMARKER; diff --git a/sys/ufs/ffs/ffs_vnops.c b/sys/ufs/ffs/ffs_vnops.c index 5c99d5b..ef6194c 100644 --- a/sys/ufs/ffs/ffs_vnops.c +++ b/sys/ufs/ffs/ffs_vnops.c @@ -508,7 +508,8 @@ ffs_read(ap) /* * Don't do readahead if this is the end of the file. */ - error = bread(vp, lbn, size, NOCRED, &bp); + error = bread_gb(vp, lbn, size, NOCRED, + GB_UNMAPPED, &bp); } else if ((vp->v_mount->mnt_flag & MNT_NOCLUSTERR) == 0) { /* * Otherwise if we are allowed to cluster, @@ -518,7 +519,8 @@ ffs_read(ap) * doing sequential access. */ error = cluster_read(vp, ip->i_size, lbn, - size, NOCRED, blkoffset + uio->uio_resid, seqcount, &bp); + size, NOCRED, blkoffset + uio->uio_resid, + seqcount, GB_UNMAPPED, &bp); } else if (seqcount > 1) { /* * If we are NOT allowed to cluster, then @@ -529,15 +531,16 @@ ffs_read(ap) * the 6th argument. */ int nextsize = blksize(fs, ip, nextlbn); - error = breadn(vp, lbn, - size, &nextlbn, &nextsize, 1, NOCRED, &bp); + error = breadn_flags(vp, lbn, size, &nextlbn, + &nextsize, 1, NOCRED, GB_UNMAPPED, &bp); } else { /* * Failing all of the above, just read what the * user asked for. Interestingly, the same as * the first option above. 
*/ - error = bread(vp, lbn, size, NOCRED, &bp); + error = bread_gb(vp, lbn, size, NOCRED, + GB_UNMAPPED, &bp); } if (error) { brelse(bp); @@ -568,8 +571,13 @@ ffs_read(ap) xfersize = size; } - error = vn_io_fault_uiomove((char *)bp->b_data + blkoffset, - (int)xfersize, uio); + if ((bp->b_flags & B_UNMAPPED) == 0) { + error = vn_io_fault_uiomove((char *)bp->b_data + + blkoffset, (int)xfersize, uio); + } else { + error = vn_io_fault_pgmove(bp->b_pages, blkoffset, + (int)xfersize, uio); + } if (error) break; @@ -700,6 +708,7 @@ ffs_write(ap) flags = seqcount << BA_SEQSHIFT; if ((ioflag & IO_SYNC) && !DOINGASYNC(vp)) flags |= IO_SYNC; + flags |= BA_UNMAPPED; for (error = 0; uio->uio_resid > 0;) { lbn = lblkno(fs, uio->uio_offset); @@ -739,8 +748,13 @@ ffs_write(ap) if (size < xfersize) xfersize = size; - error = vn_io_fault_uiomove((char *)bp->b_data + blkoffset, - (int)xfersize, uio); + if ((bp->b_flags & B_UNMAPPED) == 0) { + error = vn_io_fault_uiomove((char *)bp->b_data + + blkoffset, (int)xfersize, uio); + } else { + error = vn_io_fault_pgmove(bp->b_pages, blkoffset, + (int)xfersize, uio); + } /* * If the buffer is not already filled and we encounter an * error while trying to fill it, we have to clear out any @@ -783,7 +797,8 @@ ffs_write(ap) } else if (xfersize + blkoffset == fs->fs_bsize) { if ((vp->v_mount->mnt_flag & MNT_NOCLUSTERW) == 0) { bp->b_flags |= B_CLUSTEROK; - cluster_write(vp, bp, ip->i_size, seqcount); + cluster_write(vp, bp, ip->i_size, seqcount, + GB_UNMAPPED); } else { bawrite(bp); } diff --git a/sys/ufs/ufs/ufs_extern.h b/sys/ufs/ufs/ufs_extern.h index c590748..31a2ba8 100644 --- a/sys/ufs/ufs/ufs_extern.h +++ b/sys/ufs/ufs/ufs_extern.h @@ -121,6 +121,7 @@ void softdep_revert_rmdir(struct inode *, struct inode *); */ #define BA_CLRBUF 0x00010000 /* Clear invalid areas of buffer. */ #define BA_METAONLY 0x00020000 /* Return indirect block buffer. */ +#define BA_UNMAPPED 0x00040000 /* Do not mmap resulted buffer. */ #define BA_SEQMASK 0x7F000000 /* Bits holding seq heuristic. 
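Note: the ffs_read()/ffs_write() hunks above reduce to one pattern: request the block with GB_UNMAPPED, then copy through b_data only if the buffer cache mapped it anyway, and copy straight from the pages otherwise. The following is a minimal sketch of that pattern for reference only; it is not part of the patch, example_read_block() and its argument list are invented for the illustration, and it assumes the patched declarations above (bread_gb(), B_UNMAPPED, vn_io_fault_uiomove(), vn_io_fault_pgmove()).

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bio.h>
#include <sys/buf.h>
#include <sys/ucred.h>
#include <sys/uio.h>
#include <sys/vnode.h>
#include <vm/vm.h>

/* Illustrative sketch only -- not part of the patch under test. */
static int
example_read_block(struct vnode *vp, daddr_t lbn, int size, int blkoffset,
    int xfersize, struct uio *uio)
{
	struct buf *bp;
	int error;

	/* Ask the buffer cache not to establish a KVA mapping. */
	error = bread_gb(vp, lbn, size, NOCRED, GB_UNMAPPED, &bp);
	if (error != 0)
		return (error);

	/*
	 * GB_UNMAPPED is advisory: a mapped buffer may still come back,
	 * in which case the copy goes through b_data as before;
	 * otherwise the data is moved directly from the buffer pages.
	 */
	if ((bp->b_flags & B_UNMAPPED) == 0)
		error = vn_io_fault_uiomove((char *)bp->b_data + blkoffset,
		    xfersize, uio);
	else
		error = vn_io_fault_pgmove(bp->b_pages, blkoffset,
		    xfersize, uio);
	brelse(bp);
	return (error);
}

The write path in the ffs_write() hunk uses the same B_UNMAPPED branch, with BA_UNMAPPED added to the balloc flags so the allocated buffer need not be mapped either.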
diff --git a/sys/vm/pmap.h b/sys/vm/pmap.h
index d06c22b..c64a549 100644
--- a/sys/vm/pmap.h
+++ b/sys/vm/pmap.h
@@ -108,6 +108,8 @@ void	pmap_clear_modify(vm_page_t m);
 void	pmap_clear_reference(vm_page_t m);
 void	pmap_copy(pmap_t, pmap_t, vm_offset_t, vm_size_t, vm_offset_t);
 void	pmap_copy_page(vm_page_t, vm_page_t);
+void	pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset,
+	    vm_page_t mb[], vm_offset_t b_offset, int xfersize);
 void	pmap_enter(pmap_t, vm_offset_t, vm_prot_t, vm_page_t,
	    vm_prot_t, boolean_t);
 void	pmap_enter_object(pmap_t pmap, vm_offset_t start,
diff --git a/sys/vm/swap_pager.c b/sys/vm/swap_pager.c
index 44bff25..10a2c28 100644
--- a/sys/vm/swap_pager.c
+++ b/sys/vm/swap_pager.c
@@ -758,6 +758,16 @@ swp_pager_strategy(struct buf *bp)
	TAILQ_FOREACH(sp, &swtailq, sw_list) {
		if (bp->b_blkno >= sp->sw_first && bp->b_blkno < sp->sw_end) {
			mtx_unlock(&sw_dev_mtx);
+			if ((sp->sw_flags & SW_UNMAPPED) != 0) {
+				bp->b_kvaalloc = bp->b_data;
+				bp->b_data = unmapped_buf;
+				bp->b_kvabase = unmapped_buf;
+				bp->b_offset = 0;
+				bp->b_flags |= B_UNMAPPED;
+			} else {
+				pmap_qenter((vm_offset_t)bp->b_data,
+				    &bp->b_pages[0], bp->b_bcount / PAGE_SIZE);
+			}
			sp->sw_strategy(bp, sp);
			return;
		}
@@ -1155,11 +1165,6 @@ swap_pager_getpages(vm_object_t object, vm_page_t *m, int count, int reqpage)
	bp = getpbuf(&nsw_rcount);
	bp->b_flags |= B_PAGING;
-	/*
-	 * map our page(s) into kva for input
-	 */
-	pmap_qenter((vm_offset_t)bp->b_data, m + i, j - i);
-
	bp->b_iocmd = BIO_READ;
	bp->b_iodone = swp_pager_async_iodone;
	bp->b_rcred = crhold(thread0.td_ucred);
@@ -1371,8 +1376,6 @@ swap_pager_putpages(vm_object_t object, vm_page_t *m, int count,
		bp->b_flags |= B_PAGING;
		bp->b_iocmd = BIO_WRITE;
-		pmap_qenter((vm_offset_t)bp->b_data, &m[i], n);
-
		bp->b_rcred = crhold(thread0.td_ucred);
		bp->b_wcred = crhold(thread0.td_ucred);
		bp->b_bcount = PAGE_SIZE * n;
@@ -1484,7 +1487,12 @@ swp_pager_async_iodone(struct buf *bp)
	/*
	 * remove the mapping for kernel virtual
	 */
-	pmap_qremove((vm_offset_t)bp->b_data, bp->b_npages);
+	if ((bp->b_flags & B_UNMAPPED) != 0) {
+		bp->b_data = bp->b_kvaalloc;
+		bp->b_kvabase = bp->b_kvaalloc;
+		bp->b_flags &= ~B_UNMAPPED;
+	} else
+		pmap_qremove((vm_offset_t)bp->b_data, bp->b_npages);
	if (bp->b_npages) {
		object = bp->b_pages[0]->object;
@@ -2144,7 +2152,8 @@ swapon_check_swzone(unsigned long npages)
 }
 
 static void
-swaponsomething(struct vnode *vp, void *id, u_long nblks, sw_strategy_t *strategy, sw_close_t *close, dev_t dev)
+swaponsomething(struct vnode *vp, void *id, u_long nblks,
+    sw_strategy_t *strategy, sw_close_t *close, dev_t dev, int flags)
 {
	struct swdevt *sp, *tsp;
	swblk_t dvbase;
@@ -2180,6 +2189,7 @@ swaponsomething(struct vnode *vp, void *id, u_long nblks, sw_strategy_t *strateg
	sp->sw_used = 0;
	sp->sw_strategy = strategy;
	sp->sw_close = close;
+	sp->sw_flags = flags;
	sp->sw_blist = blist_create(nblks, M_WAITOK);
	/*
@@ -2537,10 +2547,19 @@ swapgeom_strategy(struct buf *bp, struct swdevt *sp)
	bio->bio_caller2 = bp;
	bio->bio_cmd = bp->b_iocmd;
-	bio->bio_data = bp->b_data;
	bio->bio_offset = (bp->b_blkno - sp->sw_first) * PAGE_SIZE;
	bio->bio_length = bp->b_bcount;
	bio->bio_done = swapgeom_done;
+	if ((bp->b_flags & B_UNMAPPED) != 0) {
+		bio->bio_ma = bp->b_pages;
+		bio->bio_data = unmapped_buf;
+		bio->bio_ma_offset = (vm_offset_t)bp->b_offset & PAGE_MASK;
+		bio->bio_ma_n = bp->b_npages;
+		bio->bio_flags |= BIO_UNMAPPED;
+	} else {
+		bio->bio_data = bp->b_data;
+		bio->bio_ma = NULL;
+	}
	g_io_request(bio, cp);
	return;
 }
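Note: the swapgeom_strategy() hunk above is the buf-to-bio handoff for unmapped buffers; the bdata2bio() routine declared in sys/sys/buf.h earlier in the patch presumably centralizes the same translation. Below is a sketch of that translation inferred from the hunk and from the bio_ma fields added to struct bio; the function body is illustrative, not the patch's actual implementation, and BIO_UNMAPPED and unmapped_buf are taken to be provided elsewhere by the patch.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bio.h>
#include <sys/buf.h>

/* Illustrative sketch, inferred from the swapgeom_strategy() hunk above. */
static void
example_bdata2bio(struct buf *bp, struct bio *bip)
{
	if ((bp->b_flags & B_UNMAPPED) != 0) {
		/* Hand the consumer the VM pages instead of a KVA alias. */
		bip->bio_ma = bp->b_pages;
		bip->bio_ma_n = bp->b_npages;
		bip->bio_ma_offset = (vm_offset_t)bp->b_offset & PAGE_MASK;
		bip->bio_data = unmapped_buf;
		bip->bio_flags |= BIO_UNMAPPED;
	} else {
		bip->bio_data = bp->b_data;
		bip->bio_ma = NULL;
	}
}

A consumer that cannot handle BIO_UNMAPPED would still need a transient KVA mapping; the bio_transient_map carved out in the vm_init.c hunk below appears to exist for exactly that purpose.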
@@ -2630,9 +2649,9 @@ swapongeom_ev(void *arg, int flags)
	}
	nblks = pp->mediasize / DEV_BSIZE;
	swaponsomething(swh->vp, cp, nblks, swapgeom_strategy,
-	    swapgeom_close, dev2udev(swh->dev));
+	    swapgeom_close, dev2udev(swh->dev),
+	    (pp->flags & G_PF_ACCEPT_UNMAPPED) != 0 ? SW_UNMAPPED : 0);
	swh->error = 0;
-	return;
 }
 
 static int
@@ -2721,6 +2740,6 @@ swaponvp(struct thread *td, struct vnode *vp, u_long nblks)
		return (error);
	swaponsomething(vp, vp, nblks, swapdev_strategy, swapdev_close,
-	    NODEV);
+	    NODEV, 0);
	return (0);
 }
diff --git a/sys/vm/swap_pager.h b/sys/vm/swap_pager.h
index 5c716d9..79f8767 100644
--- a/sys/vm/swap_pager.h
+++ b/sys/vm/swap_pager.h
@@ -68,6 +68,7 @@ struct swdevt {
	sw_close_t *sw_close;
 };
 
+#define	SW_UNMAPPED	0x01
 #define	SW_CLOSING	0x04
 
 #ifdef _KERNEL
diff --git a/sys/vm/vm.h b/sys/vm/vm.h
index 132c10e..106c510 100644
--- a/sys/vm/vm.h
+++ b/sys/vm/vm.h
@@ -136,6 +136,8 @@ struct kva_md_info {
	vm_offset_t	clean_eva;
	vm_offset_t	pager_sva;
	vm_offset_t	pager_eva;
+	vm_offset_t	bio_transient_sva;
+	vm_offset_t	bio_transient_eva;
 };
 
 extern struct kva_md_info kmi;
diff --git a/sys/vm/vm_init.c b/sys/vm/vm_init.c
index c507691..2eb1070 100644
--- a/sys/vm/vm_init.c
+++ b/sys/vm/vm_init.c
@@ -186,10 +186,15 @@ again:
		panic("startup: table size inconsistency");
	clean_map = kmem_suballoc(kernel_map, &kmi->clean_sva, &kmi->clean_eva,
-	    (long)nbuf * BKVASIZE + (long)nswbuf * MAXPHYS, TRUE);
+	    (long)nbuf * BKVASIZE + (long)nswbuf * MAXPHYS +
+	    (long)bio_transient_maxcnt * MAXPHYS, TRUE);
	buffer_map = kmem_suballoc(clean_map, &kmi->buffer_sva,
	    &kmi->buffer_eva, (long)nbuf * BKVASIZE, FALSE);
	buffer_map->system_map = 1;
+	bio_transient_map = kmem_suballoc(clean_map, &kmi->bio_transient_sva,
+	    &kmi->bio_transient_eva, (long)bio_transient_maxcnt * MAXPHYS,
+	    FALSE);
+	bio_transient_map->system_map = 1;
	pager_map = kmem_suballoc(clean_map, &kmi->pager_sva, &kmi->pager_eva,
	    (long)nswbuf * MAXPHYS, FALSE);
	pager_map->system_map = 1;
diff --git a/sys/vm/vm_kern.c b/sys/vm/vm_kern.c
index ad9aa0d..efd2bf2 100644
--- a/sys/vm/vm_kern.c
+++ b/sys/vm/vm_kern.c
@@ -85,11 +85,12 @@ __FBSDID("$FreeBSD$");
 #include
 #include
 
-vm_map_t kernel_map=0;
-vm_map_t kmem_map=0;
-vm_map_t exec_map=0;
+vm_map_t kernel_map;
+vm_map_t kmem_map;
+vm_map_t exec_map;
 vm_map_t pipe_map;
-vm_map_t buffer_map=0;
+vm_map_t buffer_map;
+vm_map_t bio_transient_map;
 const void *zero_region;
 CTASSERT((ZERO_REGION_SIZE & PAGE_MASK) == 0);
diff --git a/sys/vm/vnode_pager.c b/sys/vm/vnode_pager.c
index a6d78f4..86ca7b4 100644
--- a/sys/vm/vnode_pager.c
+++ b/sys/vm/vnode_pager.c
@@ -697,6 +697,7 @@ vnode_pager_generic_getpages(vp, m, bytecount, reqpage)
	int runpg;
	int runend;
	struct buf *bp;
+	struct mount *mp;
	int count;
	int error;
@@ -899,12 +900,23 @@ vnode_pager_generic_getpages(vp, m, bytecount, reqpage)
	}
	bp = getpbuf(&vnode_pbuf_freecnt);
-	kva = (vm_offset_t) bp->b_data;
+	kva = (vm_offset_t)bp->b_data;
	/*
-	 * and map the pages to be read into the kva
+	 * and map the pages to be read into the kva, if the filesystem
+	 * requires mapped buffers.
	 */
-	pmap_qenter(kva, m, count);
+	mp = vp->v_mount;
+	if (mp != NULL && (mp->mnt_kern_flag & MNTK_UNMAPPED_BUFS) != 0) {
+		bp->b_data = unmapped_buf;
+		bp->b_kvabase = unmapped_buf;
+		bp->b_offset = 0;
+		bp->b_flags |= B_UNMAPPED;
+		bp->b_npages = count;
+		for (i = 0; i < count; i++)
+			bp->b_pages[i] = m[i];
+	} else
+		pmap_qenter(kva, m, count);
	/* build a minimal buffer header */
	bp->b_iocmd = BIO_READ;
@@ -933,11 +945,22 @@ vnode_pager_generic_getpages(vp, m, bytecount, reqpage)
	if ((bp->b_ioflags & BIO_ERROR) != 0)
		error = EIO;
-	if (!error) {
-		if (size != count * PAGE_SIZE)
-			bzero((caddr_t) kva + size, PAGE_SIZE * count - size);
+	if (error != 0 && size != count * PAGE_SIZE) {
+		if ((bp->b_flags & B_UNMAPPED) != 0) {
+			bp->b_flags &= ~B_UNMAPPED;
+			pmap_qenter(kva, m, count);
+		}
+		bzero((caddr_t)kva + size, PAGE_SIZE * count - size);
+	}
+	if ((bp->b_flags & B_UNMAPPED) == 0)
+		pmap_qremove(kva, count);
+	if (mp != NULL && (mp->mnt_kern_flag & MNTK_UNMAPPED_BUFS) != 0) {
+		bp->b_data = (caddr_t)kva;
+		bp->b_kvabase = (caddr_t)kva;
+		bp->b_flags &= ~B_UNMAPPED;
+		for (i = 0; i < count; i++)
+			bp->b_pages[i] = NULL;
	}
-	pmap_qremove(kva, count);
	/*
	 * free the buffer header back to the swap buffer pool

--
Test scenario: linger4.sh