- These functions need the page queues lock:
  (* means I have a patch to remove the page queues lock dependency for that
  function; ** means the function really needs the page queues lock because
  it really does interact with the page queues.)

    pmap_enter
        pmap_allocpte
            wire_count
            _pmap_allocpte
                vm_page_alloc **
                vm_page_select_cache **
        pmap_unuse_pt
            pmap_unwire_pte_hold
                wire_count
                _pmap_unwire_pte_hold
                    wire_count
                    vm_page_free_zero **
        wire_count
        pmap_remove_entry
            pv_list *
            free_pv_entry
                vm_page_free **
        pmap_insert_entry
            pv_list *
            get_pv_entry
                pagedaemon_wakeup ?
                vm_page_alloc
                pmap_collect
        vm_page_flag_set

    pmap_enter_quick
        pmap_enter_quick_locked
            wire_count
            _pmap_allocpte
            pmap_unwire_pte_hold
            pmap_try_insert_pv_entry
                pv_list *
                get_pv_entry

    pmap_remove
        pmap_remove_page
            pmap_remove_pte
                pmap_remove_entry
                pmap_unuse_pt
        pmap_remove_pte
        pmap_unuse_pt

    pmap_remove_all
        pv_list *
        vm_page_flag_set
        pmap_unuse_pt
        free_pv_entry

    pmap_protect
        pmap_remove
        vm_page_flag_set

    pmap_copy
        pmap_allocpde
            _pmap_allocpte
        pmap_try_insert_pv_entry
        pmap_unwire_pte_hold
        pmap_allocpte

    pmap_extract_and_hold *
        vm_page_hold *

    pmap_page_exists_quick *
        pv_list *

    pmap_remove_pages
        pv_list *
        pmap_unuse_pt
        vm_page_free

    pmap_is_modified *
        pv_list *

    pmap_remove_write *
        pv_list *
        vm_page_flag_clear

    pmap_ts_referenced *
        pv_list *

    pmap_clear_modify *
        pv_list *

    pmap_clear_reference *
        pv_list *

    pmap_mincore
        pmap_is_modified *
        pmap_ts_referenced *
        vm_page_flag_set

    pmap_collect
        pv_list *
        free_pv_entry
            vm_page_free **
        pmap_unuse_pt
            pmap_unwire_pte_hold
                wire_count
                _pmap_unwire_pte_hold
                    wire_count
                    vm_page_free_zero **

- The vm page queues lock is the second hottest lock in the kernel after
  sched_lock.

- The vm page queues lock needs to be acquired before the pmap lock.

Plan to make pmap use the page queues lock less:

- Make vm_page_flag_set/clear just use atomic operations to get rid of the
  page queues lock dependency.  Is this safe?  (A sketch follows at the end
  of these notes.)

- vm_page_hold and vm_page_unhold can be made to not acquire the queues lock
  in the common case.  DONE

- Add a mutex pool for vm pages to protect the pv entry lists.  IN PROGRESS
  This makes struct pv_entry larger because it needs to store a pointer to
  the pte in it.  (Sketch below.)

- We can change pmap_unuse_pt and free_pv_entry to just mark the pages they
  want to free in an array allocated by the caller.  The caller then frees
  those pages after it drops the pmap lock.  This way, pmap_remove can run
  mostly without the queues lock.  (Sketch below.)

- It should be possible to make vm_page->wire_count use atomic operations
  instead of needing a lock.  Tricky.  (Sketch below.)

- Once the above is done, it should be possible to:
  - Pre-allocate a pv chunk early in pmap_enter, if there are no free ones.
  - Drop the page queues lock immediately after the pmap_allocpte in
    pmap_enter.
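
Sketch for the atomic vm_page_flag_set/clear idea.  This is only a sketch:
it assumes m->flags is a u_short and that atomic_set_short()/
atomic_clear_short() are usable on every supported architecture, and whether
any caller relies on the queues lock to keep a flag update atomic with
respect to other page state is exactly the "is this safe?" question above.

    #include <sys/types.h>
    #include <machine/atomic.h>

    #include <vm/vm.h>
    #include <vm/vm_page.h>

    void
    vm_page_flag_set(vm_page_t m, unsigned short bits)
    {
            /* Was: mtx_assert(&vm_page_queue_mtx, MA_OWNED); m->flags |= bits; */
            atomic_set_short(&m->flags, bits);
    }

    void
    vm_page_flag_clear(vm_page_t m, unsigned short bits)
    {
            /* Was: mtx_assert(&vm_page_queue_mtx, MA_OWNED); m->flags &= ~bits; */
            atomic_clear_short(&m->flags, bits);
    }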
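
Sketch for the pv list mutex pool.  It uses the stock mtx_pool(9) interface
keyed on the vm_page pointer; the vm_page_pv_lock/unlock names are made up
for the sketch.

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    #include <vm/vm.h>
    #include <vm/vm_page.h>

    /*
     * Pages hash onto a small set of pool mutexes, so no per-page storage
     * is needed; two unrelated pages occasionally sharing a mutex is fine.
     */
    #define vm_page_pv_lock(m)      mtx_pool_lock(mtxpool_sleep, (m))
    #define vm_page_pv_unlock(m)    mtx_pool_unlock(mtxpool_sleep, (m))

pmap_insert_entry, pmap_remove_entry, and the pv_list walkers would then
bracket their accesses to the page's pv list with these instead of asserting
the page queues lock.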
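
Sketch for the deferred-free idea.  All names are illustrative; the point is
only that the leaf functions queue pages on a caller-provided batch instead
of freeing them while the pmap lock is held, and the batch is then freed
once, after the pmap lock is dropped, under a single acquisition of the page
queues lock.

    #include <sys/param.h>
    #include <sys/systm.h>

    #include <vm/vm.h>
    #include <vm/vm_page.h>

    struct pmap_free_batch {
            vm_page_t       pages[16];      /* arbitrary size for the sketch */
            int             count;
    };

    /*
     * Called from pmap_unuse_pt()/free_pv_entry() in place of the direct
     * free.  A real version would size the array for the worst case or
     * flush it when it fills up; the sketch just asserts there is room.
     */
    static void
    pmap_defer_free(struct pmap_free_batch *fb, vm_page_t m)
    {
            KASSERT(fb->count < (int)(sizeof(fb->pages) / sizeof(fb->pages[0])),
                ("pmap_defer_free: batch overflow"));
            fb->pages[fb->count++] = m;
    }

    /* Called by pmap_remove() and friends after dropping the pmap lock. */
    static void
    pmap_free_deferred(struct pmap_free_batch *fb)
    {
            int i;

            vm_page_lock_queues();          /* vm_page_free still wants this */
            for (i = 0; i < fb->count; i++)
                    vm_page_free(fb->pages[i]);
            vm_page_unlock_queues();
            fb->count = 0;
    }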
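
Sketch for the atomic wire_count idea, and why it is tricky: the decision to
free a page table page has to come from the value returned by the atomic op
itself, and the field is currently a u_short, so this assumes it is widened
to a u_int so that atomic_fetchadd_int() can be used.  The function names
are made up.

    #include <sys/types.h>
    #include <machine/atomic.h>

    #include <vm/vm.h>
    #include <vm/vm_page.h>

    static __inline void
    pmap_pt_page_ref(vm_page_t m)
    {
            atomic_add_int(&m->wire_count, 1);
    }

    /*
     * Returns 1 if this drop released the last reference, in which case
     * the caller (pmap_unwire_pte_hold) can go on to free the page.
     * Testing wire_count separately after a plain decrement would race.
     */
    static __inline int
    pmap_pt_page_unref(vm_page_t m)
    {
            return (atomic_fetchadd_int(&m->wire_count, -1) == 1);
    }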