From 550412cb18391341cd5e52c9bbf678a5dc55f1f5 Mon Sep 17 00:00:00 2001
From: Mark Johnston
Date: Tue, 8 Mar 2016 21:28:31 -0800
Subject: [PATCH 6/6] Run dirty pages through the inactive queue before the
 laundry queue.

This change modifies the active queue aging scan to place dirty pages in
the inactive queue rather than the laundry queue. vm_page_advise() is
modified similarly.

One side effect of this change is that dirty pages are given more time
to be reactivated rather than paged out, but that is not its primary
motivation. Pushing dirty pages through the inactive queue establishes
a temporal relationship between the inactive and laundry queues and
results in a more cohesive LRU page replacement mechanism. Without this
change, the pagedaemon and laundry threads have little information about
each other's activity, and little can be said about the relative ages of
the pages at the heads of the inactive and laundry queues at any given
point in time. In general, however, we would prefer to avoid laundering
a given page until some number of clean pages have been reclaimed in an
attempt to satisfy a page shortage. The proposed laundering policy uses
the ratio of dirty to clean inactive pages to set a minimum threshold
for laundering, but without a shared queue we cannot be certain that all
less-recently-used clean pages have been reclaimed before deciding to
launder a dirty page.

A downside to this change is that the pagedaemon is forced to expend
more CPU cycles handling dirty pages, as it must now examine each dirty
page twice. However, this cost is small relative to that of a swap
pageout. It may be reasonable to move vnode-backed pages directly to
the laundry queue, but this change aims to minimize differences in
behaviour with respect to FreeBSD head/.
---
 sys/vm/vm_page.c    | 13 ++++++-------
 sys/vm/vm_pageout.c | 30 +++++++++++++++++++++++++-----
 2 files changed, 31 insertions(+), 12 deletions(-)

diff --git a/sys/vm/vm_page.c b/sys/vm/vm_page.c
index 5cfc352..29b968d 100644
--- a/sys/vm/vm_page.c
+++ b/sys/vm/vm_page.c
@@ -3300,14 +3300,13 @@ vm_page_advise(vm_page_t m, int advice)
 		vm_page_dirty(m);
 
 	/*
-	 * Place clean pages near the head of the inactive queue rather than the
-	 * tail, thus defeating the queue's LRU operation and ensuring that the
-	 * page will be reused quickly.
+	 * Place clean pages near the head of the inactive queue rather than
+	 * the tail, thus defeating the queue's LRU operation and ensuring that
+	 * the page will be reused quickly. Dirty pages are given a chance to
+	 * cycle once through the inactive queue before becoming eligible for
+	 * laundering.
 	 */
-	if (m->dirty == 0)
-		_vm_page_deactivate(m, TRUE);
-	else
-		vm_page_launder(m);
+	_vm_page_deactivate(m, m->dirty == 0);
 }
 
 /*
diff --git a/sys/vm/vm_pageout.c b/sys/vm/vm_pageout.c
index 9854569..6e6174d 100644
--- a/sys/vm/vm_pageout.c
+++ b/sys/vm/vm_pageout.c
@@ -1603,15 +1603,35 @@ drop_page:
 			/* Dequeue to avoid later lock recursion. */
 			vm_page_dequeue_locked(m);
 #if 0
+			/*
+			 * This requires the object write lock. It might be a
+			 * good idea during a page shortage, but might also
+			 * cause contention with a concurrent attempt to launder
+			 * pages from this object.
+			 */
 			if (m->object->ref_count != 0)
 				vm_page_test_dirty(m);
 #endif
-			if (m->dirty == 0) {
+			/*
+			 * When not short of inactive pages, let dirty pages go
+			 * through the inactive queue before moving to the
+			 * laundry queue. This gives them some extra time to
+			 * be reactivated, potentially avoiding an expensive
+			 * pageout. During a page shortage, the inactive queue
+			 * is necessarily small, so we may move dirty pages
+			 * directly to the laundry queue.
+			 */
+			if (page_shortage <= 0)
 				vm_page_deactivate(m);
-				page_shortage -= act_scan_laundry_weight;
-			} else {
-				vm_page_launder(m);
-				page_shortage--;
+			else {
+				if (m->dirty == 0) {
+					vm_page_deactivate(m);
+					page_shortage -=
+					    act_scan_laundry_weight;
+				} else {
+					vm_page_launder(m);
+					page_shortage--;
+				}
 			}
 		} else
 			vm_page_requeue_locked(m);
-- 
2.7.2
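
To make the laundering policy referenced in the commit message concrete,
below is a minimal standalone C sketch of a dirty/clean ratio threshold:
launder a dirty page only after roughly (clean / dirty) clean inactive
pages have been reclaimed. This is an illustration under stated
assumptions, not the FreeBSD implementation: struct q_state, its fields,
and should_launder() are hypothetical names, and the kernel derives the
corresponding counts from the page queues themselves.

#include <stdio.h>

/*
 * Hypothetical snapshot of inactive-queue composition. In the kernel
 * these figures would come from the page queues; here they are plain
 * counters for illustration.
 */
struct q_state {
	long clean_inactive;	/* clean pages in the inactive queue */
	long dirty_inactive;	/* dirty pages in the inactive queue */
	long clean_freed;	/* clean pages reclaimed since last launder */
};

/*
 * Decide whether background laundering should run. The policy sketched
 * here uses the ratio of clean to dirty inactive pages as a minimum
 * number of clean reclamations per laundering run, so that
 * less-recently-used clean pages are freed before a dirty page is
 * laundered.
 */
static int
should_launder(const struct q_state *qs)
{
	long threshold;

	if (qs->dirty_inactive == 0)
		return (0);	/* nothing to launder */

	/* Clean reclamations required before the next laundering run. */
	threshold = qs->clean_inactive / qs->dirty_inactive;
	return (qs->clean_freed >= threshold);
}

int
main(void)
{
	struct q_state qs = { .clean_inactive = 900, .dirty_inactive = 100 };

	/* With a 9:1 clean-to-dirty ratio, laundering starts at 9 frees. */
	for (qs.clean_freed = 0; qs.clean_freed <= 10; qs.clean_freed++)
		printf("clean_freed=%ld -> launder: %s\n", qs.clean_freed,
		    should_launder(&qs) ? "yes" : "no");
	return (0);
}

The patch's point is that once dirty pages flow through the shared
inactive queue, a threshold of this kind can be applied with the
assurance that the clean pages already reclaimed were at least as old
as the dirty pages now being considered for laundering.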