Index: config/chapter.sgml
===================================================================
RCS file: /home/ncvs/doc/en_US.ISO8859-1/books/handbook/config/chapter.sgml,v
retrieving revision 1.81
diff -u -r1.81 chapter.sgml
--- config/chapter.sgml 2003/01/12 18:31:43 1.81
+++ config/chapter.sgml 2003/01/13 01:24:56
@@ -1251,6 +1251,73 @@
experiment to find out.
+
+ vfs.write_behind
+
+
+ vfs.write_behind
+
+
+ The vfs.write_behind sysctl variable
+ defaults to 1 (on). This tells the file system
+ to issue media writes as full clusters are collected, which
+ typically occurs when writing large sequential files. The idea
+ is to avoid saturating the buffer cache with dirty buffers when
+ it would not benefit I/O performance. However, this may stall
+ processes, and under certain circumstances you may wish to turn
+ it off.
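+
+ As a minimal illustration (the value is simply the off setting
+ described above, not a tuned recommendation), the feature can be
+ disabled at runtime with &man.sysctl.8; or persistently in
+ /etc/sysctl.conf:
+
+ &prompt.root; sysctl vfs.write_behind=0
+
+ # /etc/sysctl.conf (illustrative)
+ vfs.write_behind=0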
+
+
+
+ vfs.hirunningspace
+
+
+ vfs.hirunningspace
+
+
+ The vfs.hirunningspace sysctl variable
+ determines how much outstanding write I/O may be queued to disk
+ controllers system-wide at any given instant. The default is
+ usually sufficient, but on machines with lots of disks you may
+ want to bump it up to four or five megabytes.
+ Note that setting too high a value (exceeding the buffer cache's
+ write threshold) can lead to extremely bad clustering
+ performance. Do not set this value arbitrarily high! Higher
+ write values may add latency to reads occurring at the same time.
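+
+ As a sketch only (the byte value below is an assumption standing
+ in for roughly five megabytes, not a tested figure), the limit
+ could be raised at runtime with &man.sysctl.8;:
+
+ &prompt.root; sysctl vfs.hirunningspace=5242880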
+
+
+ There are various other buffer-cache and VM page cache
+ related sysctls. We do not recommend modifying these values. As
+ of FreeBSD 4.3, the VM system does an extremely good job of
+ automatically tuning itself.
+
+
+
+ vm.swap_idle_enabled
+
+
+ vm.swap_idle_enabled
+
+
+ The vm.swap_idle_enabled sysctl variable
+ is useful in large multi-user systems where you have lots of
+ users entering and leaving the system and lots of idle processes.
+ Such systems tend to generate a great deal of continuous pressure
+ on free memory reserves. Turning this feature on and tweaking
+ the swapout hysteresis (in idle seconds) via
+ vm.swap_idle_threshold1 and
+ vm.swap_idle_threshold2 allows you to depress
+ the priority of memory pages associated with idle processes more
+ quickly than the normal pageout algorithm would. This gives a
+ helping hand to the pageout daemon. Do not turn this option on
+ unless you need it, because the tradeoff you are making is
+ essentially to pre-page memory sooner rather than later, thus
+ consuming more swap and disk bandwidth. In a small system this
+ option will have a detrimental effect, but in a large system that
+ is already doing moderate paging this option allows the VM system
+ to stage whole processes into and out of memory easily.
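+
+ A hedged sketch of enabling the feature; the threshold values are
+ illustrative assumptions and should be adjusted to your workload:
+
+ &prompt.root; sysctl vm.swap_idle_enabled=1
+ &prompt.root; sysctl vm.swap_idle_threshold1=2
+ &prompt.root; sysctl vm.swap_idle_threshold2=10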
+
+
hw.ata.wc
@@ -1279,6 +1346,26 @@
For more information, please see &man.ata.4;.
+
+
+
+ SCSI_DELAY (kern.cam.scsi_delay)
+
+
+
+ kern.cam.scsi_delay
+
+
+ The SCSI_DELAY kernel config may be used to
+ reduce system boot times. The defaults are fairly high and can be
+ responsible for 15+ seconds of delay in the
+ boot process. Reducing it to 5 seconds usually
+ works (especially with modern drives). Newer versions of FreeBSD
+ (5.0+) should use the kern.cam.scsi_delay
+ boot time tunable. Both the tunable and the kernel config
+ option accept values in terms of milliseconds and
+ NOT seconds.
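+
+ As a sketch, using the five second figure mentioned above
+ (expressed in milliseconds), the tunable can be set in
+ /boot/loader.conf, or the option compiled into a custom kernel:
+
+ # /boot/loader.conf (FreeBSD 5.0 and later, illustrative value)
+ kern.cam.scsi_delay="5000"
+
+ # custom kernel configuration file
+ options SCSI_DELAY=5000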
+
@@ -1508,13 +1595,34 @@
your system.
+
+
+ kern.ipc.somaxconn
+
+
+ kern.ipc.somaxconn
+
+
+ The kern.ipc.somaxconn sysctl variable
+ limits the size of the listen queue for accepting new TCP
+ connections. The default value of 128 is
+ typically too low for robust handling of new connections in a
+ heavily loaded web server environment. For such environments, it
+ is recommended to increase this value to 1024 or
+ higher. The service daemon may itself limit the listen queue size
+ (e.g. &man.sendmail.8; or Apache), but
+ will often have a directive in its configuration file to adjust
+ the queue size. Large listen queues also do a better job of
+ avoiding Denial of Service (DoS) attacks.
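+
+ For example, using the 1024 value suggested above (adjust to
+ your own load):
+
+ &prompt.root; sysctl kern.ipc.somaxconn=1024
+
+ # /etc/sysctl.conf (to keep the setting across reboots)
+ kern.ipc.somaxconn=1024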
+
+
Network Limits

The NMBCLUSTERS kernel configuration
- option dictates the amount of network mbufs available to the
- system. A heavily-trafficked server with a low number of MBUFs
+ option dictates the amount of network Mbufs available to the
+ system. A heavily-trafficked server with a low number of Mbufs
will hinder FreeBSD's ability to serve network traffic
efficiently. Each cluster represents
approximately 2 K of memory, so a value of 1024 represents 2
megabytes of kernel memory reserved for network buffers. A
@@ -1523,7 +1631,106 @@
simultaneous connections, and each connection eats a 16 K receive
and 16 K send buffer, you need approximately 32 MB worth of
network buffers to cover the web server. A good rule of thumb is
- to multiply by 2, so 2x32 MB / 2 KB = 64 MB / 2 kB = 32768.
+ to multiply by 2, so 2x32 MB / 2 KB = 64 MB / 2 KB = 32768. We recommend values between 4096 and
+ 32768 for machines with greater amounts of memory. Under no
+ circumstances should you specify an arbitrarily high value for this
+ parameter as it could lead to a boot time crash. The
+ -m option to &man.netstat.1; may be used to
+ observe network cluster use.
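+
+ For example, current cluster usage can be inspected at any time
+ with (output format varies between releases):
+
+ &prompt.root; netstat -m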
+
+ The kern.ipc.nmbclusters loader tunable should
+ be used to tune this at boot time. Only older versions of FreeBSD
+ will require you to use the NMBCLUSTERS kernel
+ &man.config.8; option.
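+
+ A sketch of such a boot-time setting, reusing the 32768 figure
+ from the rule of thumb above (an example, not a recommendation):
+
+ # /boot/loader.conf
+ kern.ipc.nmbclusters="32768"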
+
+ Under extreme circumstances, you may need
+ to modify the kern.ipc.nsfbufs sysctl. This sysctl
+ variable controls the number of filesystem buffers &man.sendfile.2;
+ is allowed to use for performing its work. This parameter nominally
+ scales with kern.maxusers, so you should not need
+ to modify it.
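+
+ Should tuning nonetheless prove necessary, note that on recent
+ releases this is a boot-time tunable; both the placement and the
+ value below are assumptions to verify for your release:
+
+ # /boot/loader.conf (illustrative only)
+ kern.ipc.nsfbufs="6656"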
+
+
+ net.inet.ip.portrange.*
+
+
+ net.inet.ip.portrange.*
+
+
+ The net.inet.ip.portrange.* sysctl
+ variables control the port number ranges automatically bound to TCP
+ and UDP sockets. There are three ranges: a low range, a default
+ range, and a high range. Most network programs use the default
+ range, which is controlled by
+ net.inet.ip.portrange.first and
+ net.inet.ip.portrange.last; these default to
+ 1024 and 5000, respectively. Bound port ranges are used for
+ outgoing connections, and it is possible to run the system out of
+ ports under certain circumstances. This most commonly occurs
+ when you are running a heavily loaded web proxy. The port range
+ is not an issue when running servers which handle mainly incoming
+ connections, such as a normal web server, or ones that make a
+ limited number of outgoing connections, such as a mail relay. For
+ situations where you may run out of ports, it is recommended to
+ increase net.inet.ip.portrange.last modestly.
+ A value of 10000, 20000 or
+ 30000 may be reasonable. You should also
+ consider firewall effects when changing the port range. Some
+ firewalls may block large ranges of ports (usually low-numbered
+ ports) and expect systems to use higher ranges of ports for
+ outgoing connections. For this reason, it is not recommended that
+ net.inet.ip.portrange.first be lowered.
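+
+ For example, to raise the top of the default range to one of the
+ modest values mentioned above (illustrative only):
+
+ &prompt.root; sysctl net.inet.ip.portrange.last=20000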
+
+
+
+ TCP Bandwidth Delay Product
+
+
+ TCP Bandwidth Delay Product Limiting
+ net.inet.tcp.inflight_enable
+
+
+ TCP Bandwidth Delay Product Limiting is similar to
+ TCP/Vegas in NetBSD. It can be
+ enabled by setting the net.inet.tcp.inflight_enable
+ sysctl variable to 1. The system will attempt
+ to calculate the bandwidth delay product for each connection and
+ limit the amount of data queued to the network to just the amount
+ required to maintain optimum throughput.
+
+ This feature is useful if you are serving data over modems,
+ Gigabit Ethernet, or even high speed WAN links (or any other link
+ with a high bandwidth delay product), especially if you are also
+ using window scaling or have configured a large send window. If
+ you enable this option, you should also be sure to set
+ net.inet.tcp.inflight_debug to
+ 0 (disable debugging), and for production use
+ setting net.inet.tcp.inflight_min to at least
+ 6144 may be beneficial. Note, however, that
+ setting high minimums may effectively disable bandwidth limiting
+ depending on the link. The limiting feature reduces the amount of
+ data built up in intermediate route and switch packet queues as
+ well as the amount of data built up in the local host's
+ interface queue. With fewer packets queued up, interactive
+ connections, especially over slow modems, will also be able to
+ operate with lower Round Trip Times. However,
+ note that this feature only affects data transmission (uploading
+ / server side). It has no effect on data reception (downloading).
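+
+ A sketch of the settings discussed above collected in
+ /etc/sysctl.conf; the inflight_min figure is the suggested
+ production floor from the text, not a universal value:
+
+ net.inet.tcp.inflight_enable=1
+ net.inet.tcp.inflight_debug=0
+ net.inet.tcp.inflight_min=6144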
+
+
+ Adjusting net.inet.tcp.inflight_stab is
+ not recommended. This parameter defaults to
+ 20, representing 2 maximal packets added to the bandwidth delay
+ product window calculation. The additional window is required to
+ stabilize the algorithm and improve responsiveness to changing
+ conditions, but it can also result in higher ping times over slow
+ links (though still much lower than you would get without the
+ inflight algorithm). In such cases, you may wish to try reducing
+ this parameter to 15, 10, or 5, and you may also have to reduce
+ net.inet.tcp.inflight_min (for example, to
+ 3500) to get the desired effect. Reducing these parameters
+ should only be done as a last resort.
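+
+ If you do experiment as a last resort, the figures below are only
+ the illustrative ones from the text above:
+
+ &prompt.root; sysctl net.inet.tcp.inflight_stab=15
+ &prompt.root; sysctl net.inet.tcp.inflight_min=3500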
+