
Overview Demand Paging Page Replacement Algorithms Virtual Memory Interface

Virtual Memory Management


SCS2001 - Operating Systems

Viraj Brian Wijesuriya


University of Colombo School of Computing, Sri Lanka

Viraj Brian Wijesuriya

Virtual Memory Management

1 / 55


Importance of Virtual Memory

Entire program terminology (1)


It is not required that an entire program be in memory at all times. Programs often contain code to handle unusual error conditions, and since these errors seldom, if ever, occur in practice, this code is almost never executed. Arrays, lists, and tables are often allocated more memory than they actually need; for example, an array may be declared 100 by 100 elements even though it is seldom larger than 10 by 10 elements.


Entire program terminology (2)

Certain options and features of a program may be used rarely, for instance, the routines on U.S. government computers that balance the budget have not been used in many years.

Even in those cases where the entire program is needed, it may not all be needed at the same time.


Benefits of virtual memory


A program would no longer be constrained by the amount of physical memory that is available; users would be able to write programs for an extremely large virtual address space, simplifying the programming task. Less I/O would be needed to load or swap user programs into memory, so each user program would run faster. Because each user program could take less physical memory, more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput but with no increase in response time or turnaround time. Running a program that is not entirely in memory would thus benefit both the system and the user.

Role of virtual memory


Virtual memory involves the separation of logical memory as perceived by users from physical memory. This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available. Virtual memory makes the task of programming much easier, because the programmer no longer needs to worry about the amount of physical memory available; he can concentrate instead on the problem to be programmed. The virtual address space of a process refers to the logical (or virtual) view of how a process is stored in memory.

Stack and heap


The heap is allowed to grow upward in memory as it is used for dynamic memory allocation, and similarly, we allow the stack to grow downward in memory through successive function calls. The large blank space (or hole) between the heap and the stack is part of the virtual address space but will require actual physical pages only if the heap or stack grows. Virtual address spaces that include holes are known as sparse address spaces. A sparse address space is beneficial because the holes can be filled as the stack or heap segments grow or if we wish to dynamically link libraries (or possibly other shared objects) during program execution.

Growing stack and heap


Other benefits of virtual memory


In addition to separating logical memory from physical memory, virtual memory allows files and memory to be shared by two or more processes through page sharing. System libraries can be shared by several processes through mapping of the shared object into a virtual address space. Virtual memory enables processes to share memory. Virtual memory can allow pages to be shared during process creation with the fork() system call (fork() with copy-on-write), thus speeding up process creation.

Basic Concepts Performance of Demand Paging Copy-on-write

Overview
With demand-paged virtual memory, pages are only loaded when they are demanded during program execution. Pages that are never accessed are thus never loaded into physical memory. A demand-paging system is similar to a paging system with swapping where processes reside in secondary memory (usually a disk). A swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a process, thus we use pager, rather than swapper, in connection with demand paging.

Transfer of a paged memory to contiguous disk space


Protection
We need some form of hardware support to distinguish between the pages that are in memory and the pages that are on the disk. The valid-invalid bit scheme can be used for this purpose. When this bit is set to valid, the associated page is both legal and in memory. If the bit is set to invalid, the page either is not valid (that is, not in the logical address space of the process) or is valid but is currently on the disk. While the process executes and accesses pages that are memory resident, execution proceeds normally.
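The valid-invalid check described above can be sketched as a toy page-table lookup. The `PageFault` exception and the entry layout here are illustrative choices, not taken from any real kernel:

```python
class PageFault(Exception):
    """Raised when a reference touches a page whose valid bit is 0."""

def translate(page_table, page_number):
    """Return the frame holding page_number, or trap with a page fault.

    Each page-table entry is a (frame_number, valid_bit) pair; a valid
    bit of 0 means either an illegal reference or a page on disk, and
    the fault handler must consult its own tables to tell which.
    """
    if page_number >= len(page_table):
        raise PageFault("page %d outside logical address space" % page_number)
    frame, valid = page_table[page_number]
    if not valid:
        raise PageFault("page %d not memory resident" % page_number)
    return frame
```

A real MMU performs this check in hardware on every reference; the operating system only sees the resulting trap.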

Valid-invalid bit


Steps in handling a page fault


Pure demand paging


We can start executing a process with no pages in memory. When the operating system sets the instruction pointer to the first instruction of the process, which is on a non-memory-resident page, the process immediately faults for the page. After this page is brought into memory, the process continues to execute, faulting as necessary until every page that it needs is in memory. At that point, it can execute with no more faults. This scheme is called pure demand paging: never bring a page into memory until it is required.

Locality of reference
Some programs could access several new pages of memory with each instruction execution (one page for the instruction and many for data), possibly causing multiple page faults per instruction. This situation would result in unacceptable system performance. Fortunately, analysis of running processes shows that this behaviour is exceedingly unlikely. Programs tend to have locality of reference, which results in reasonable performance from demand paging.

Hardware support
The hardware to support demand paging is the same as the hardware for paging and swapping: Page table: this table has the ability to mark an entry invalid through a valid-invalid bit or a special value of protection bits. Secondary memory: this memory holds those pages that are not present in main memory. It is usually a high-speed disk, known as the swap device, and the section of disk used for this purpose is known as swap space.


Restarting an instruction
When a page fault occurs, we must be able to restart the process in exactly the same place and state, except that the desired page is now in memory and is accessible. A page fault may occur at any memory reference. If the page fault occurs on the instruction fetch, we can restart by fetching the instruction again; if a page fault occurs while we are fetching an operand, we must fetch and decode the instruction again and then fetch the operand. The major difficulty arises when one instruction may modify several different locations.

When we cannot simply restart the instruction


In one solution, the microcode computes and attempts to access both ends of both blocks (when the source and destination blocks overlap); if a page fault is going to occur, it will happen at this step, before anything is modified. The move can then take place; we know that no page fault can occur, since all the relevant pages are in memory. The other solution uses temporary registers to hold the values of overwritten locations. If there is a page fault, all the old values are written back into memory before the trap occurs; this action restores memory to its state before the instruction was started, so that the instruction can be repeated.

Effective access time (1)

Demand paging can significantly affect the performance of a computer system. Let p be the probability of a page fault and let ma be the memory-access time. The effective access time is then:

effective access time = (1 − p) × ma + p × page-fault time
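As a quick sketch, the formula can be evaluated directly; the function name and the choice of nanoseconds as the unit throughout are our own:

```python
def effective_access_time(p, ma_ns, fault_ns):
    """effective access time = (1 - p) * ma + p * page-fault time."""
    return (1 - p) * ma_ns + p * fault_ns

# With a 200 ns memory access and an 8 ms (8,000,000 ns) fault service
# time, one fault per 1,000 accesses already dominates the access time:
eat = effective_access_time(p=0.001, ma_ns=200, fault_ns=8_000_000)
# eat is 8199.8 ns, i.e. about 8.2 microseconds, roughly a 40x slowdown.
```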


Effective access time (2)


To compute the effective access time, we must know how much time is needed to service a page fault. In any case, we are faced with three major components of the page-fault service time: servicing the page-fault interrupt, reading in the page, and restarting the process. The first and third tasks (servicing the interrupt and restarting the process) can be reduced, with careful coding, to several hundred instructions. If a queue of processes is waiting for the device, we also have to add device-queueing time as we wait for the paging device to be free to service our request, increasing even more the time to swap.

Calculating effective access time (1)

With an average page-fault service time of 8 milliseconds and a memory-access time of 200 nanoseconds, the effective access time in nanoseconds is:

effective access time = (1 − p) × 200 + p × 8,000,000 = 200 + 7,999,800 × p


Calculating effective access time (2)


We see, then, that the effective access time is directly proportional to the page-fault rate. If one access out of 1,000 causes a page fault, the effective access time is 8.2 microseconds and the computer will be slowed down by a factor of 40 because of demand paging. If we want performance degradation to be less than 10 percent, we need:

220 > 200 + 7,999,800 × p, that is, p < 0.0000025, or fewer than one page fault per 400,000 memory accesses.


Swap space management


Disk I/O to swap space is generally faster than that to the file system. It is faster because swap space is allocated in much larger blocks, and file lookups and indirect allocation methods are not used. The system can therefore gain better paging throughput by copying an entire file image into the swap space at process startup and then performing demand paging from the swap space. Another option is to demand pages from the file system initially but to write the pages to swap space as they are replaced; this ensures that only needed pages are read from the file system but that all subsequent paging is done from swap space.

Limiting swap space


Some systems attempt to limit the amount of swap space used through demand paging of binary files; demand pages for such files are brought directly from the file system. When page replacement is called for, these frames can simply be overwritten (because they are never modified), and the pages can be read in from the file system again if needed. Using this approach, the file system itself serves as the backing store, but swap space must still be used for pages not associated with a file; these pages include the stack and heap for a process. This method appears to be a good compromise and is used in several systems, including Solaris and BSD UNIX.

Terminology (1)

Traditionally, fork() worked by creating a copy of the parent's address space for the child, duplicating the pages belonging to the parent. However, considering that many child processes invoke the exec() system call immediately after creation, the copying of the parent's address space may be unnecessary.


Terminology (2)

Instead, we can use a technique known as copy-on-write (COW), which works by allowing the parent and child processes initially to share the same pages. These shared pages are marked as copy-on-write pages, meaning that if either process writes to a shared page, a copy of the shared page is created. When the copy-on-write technique is used, only the pages that are modified by either process are copied; all unmodified pages can be shared by the parent and child processes.
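The bookkeeping can be illustrated with a toy model. Real COW is enforced by the MMU's write-protect bits during fork(); the `CowPage` class and reference counts below are purely illustrative:

```python
class CowPage:
    def __init__(self, data):
        self.data = bytearray(data)
        self.refs = 1          # how many address spaces map this page

def fork_pages(parent_pages):
    """Child initially shares every page with the parent (no copying)."""
    for page in parent_pages:
        page.refs += 1
    return list(parent_pages)

def write_page(pages, index, offset, value):
    """On a write, copy the page first if it is still shared."""
    page = pages[index]
    if page.refs > 1:          # shared: make a private copy now
        page.refs -= 1
        page = CowPage(page.data)
        pages[index] = page
    page.data[offset] = value
```

The first write to a shared page pays the copy cost; subsequent writes to the now-private page are cheap, which is exactly why COW helps when the child exec()s quickly.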


Example
Copy-on-write is a common technique used by several operating systems, including Windows XP, Linux, and Solaris.


Free page pool


When it is determined that a page is going to be duplicated using copy-on-write, it is important to note the location from which the free page will be allocated. Many operating systems provide a pool of free pages for such requests; free pages are typically allocated when the stack or heap for a process must expand or when there are copy-on-write pages to be managed. Operating systems typically allocate these pages using a technique known as zero-fill-on-demand. Zero-fill-on-demand pages are zeroed out before being allocated, thus erasing their previous contents.

vfork() (1)
Several versions of UNIX (including Solaris and Linux) provide a variation of the fork() system call: vfork() (for virtual memory fork). With vfork(), the parent process is suspended, and the child process uses the address space of the parent. Because vfork() does not use copy-on-write, if the child process changes any pages of the parent's address space, the altered pages will be visible to the parent once it resumes. Therefore, vfork() must be used with caution to ensure that the child process does not modify the address space of the parent.

vfork() (2)
vfork() is intended to be used when the child process calls exec() immediately after creation. Because no copying of pages takes place, vfork() is an extremely efficient method of process creation and is sometimes used to implement UNIX command-line shell interfaces. vfork() simply creates a new process that shares the parent's virtual memory: it does not copy the address space of the parent process but instead borrows the parent's memory and thread of control.

Major problems in implementing demand paging


We must solve two major problems to implement demand paging: we must develop a frame-allocation algorithm and a page-replacement algorithm. If we have multiple processes in memory, we must decide how many frames to allocate to each process; and when page replacement is required, we must select the frames that are to be replaced. Designing appropriate algorithms to solve these problems is an important task, because disk I/O is so expensive. Slight improvements in demand-paging methods yield large gains in system performance.

Basic Page Replacement LRU Page Replacement Algorithm The Working Set Page Replacement Algorithm

Overview
When a page fault occurs, the operating system has to choose a page to evict (remove from memory) to make room for the incoming page. If the page being removed has been modified while in memory, it must be rewritten to the disk to bring the disk copy up to date. If the page has not been changed, the disk copy is already up to date and no rewrite is needed. The page to be read in simply overwrites the page being evicted.
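The decision above hinges on a per-frame modify (dirty) bit. A minimal sketch, assuming a frame is a plain dict and write-backs are recorded in a list standing in for disk I/O:

```python
def evict(frame, disk_writes):
    """Free a frame, writing it back to disk only if it was modified."""
    if frame["dirty"]:
        disk_writes.append(frame["page"])   # bring the disk copy up to date
    frame["page"] = None
    frame["dirty"] = False
```

Keeping the dirty bit accurate halves the I/O cost for clean pages: they can be dropped outright and re-read later if needed.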

Picking a page to replace


Suppose that while a user process is executing, a page fault occurs; the operating system determines where the desired page resides on the disk but then finds that there are no free frames on the free-frame list. While it would be possible to pick a random page to evict at each page fault, system performance is much better if a page that is not heavily used is chosen. If a heavily used page is removed, it will probably have to be brought back in quickly, resulting in extra overhead. Much work has been done on the subject of page replacement algorithms, both theoretical and experimental.

Page-fault routine with page replacement


Page-faults versus number of frames


As the number of frames increases, the number of page faults drops to some minimal level.


List of page replacement algorithms


The optimal page replacement algorithm.
The not recently used page replacement algorithm.
The FIFO page replacement algorithm.
The second chance page replacement algorithm.
The clock page replacement algorithm.
The working set page replacement algorithm.
The LRU page replacement algorithm.
The aging page replacement algorithm.


Overview
The least recently used page replacement algorithm (LRU) is based on the observation that pages that have been heavily used in the last few instructions will probably be heavily used again in the next few. When a page fault occurs, throw out the page that has been unused for the longest time. Although LRU is theoretically realizable, it is not cheap. To fully implement LRU, it is necessary to maintain a linked list of all pages in memory, with the most recently used page at the front and the least recently used page at the rear.

LRU implementation

However, there are other ways to implement LRU with special hardware: Using a counter. Using a stack. Using a matrix.


Using a counter
Consider a 64-bit counter, C, that is automatically incremented after each instruction. Each page table entry must also have a field large enough to contain the counter. After each memory reference, the current value of C is stored in the page table entry for the page just referenced. When a page fault occurs, the operating system examines all the counters in the page table to find the lowest one; that page is the least recently used.
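In software, this scheme can be simulated with an ordinary integer clock. Hardware would bump the counter on every reference; the sketch below does so explicitly, and the class name is our own:

```python
class CounterLRU:
    def __init__(self):
        self.clock = 0
        self.last_used = {}        # page -> counter value at last reference

    def reference(self, page):
        self.clock += 1
        self.last_used[page] = self.clock

    def victim(self):
        """The page with the lowest counter is the least recently used."""
        return min(self.last_used, key=self.last_used.get)
```

The scan in victim() is linear in the number of resident pages, which is one reason this scheme is expensive in practice.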

LRU example by counter


Using a stack
Another approach to implementing LRU replacement is to keep a stack of page numbers. Whenever a page is referenced, it is removed from the stack and put on the top. In this way, the most recently used page is always at the top of the stack and the least recently used page is always at the bottom. Because entries must be removed from the middle of the stack, it is best to implement this approach by using a doubly linked list with a head pointer and a tail pointer. The tail pointer points to the bottom of the stack, which is the LRU page.
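A short simulation of the stack scheme, using a Python list in place of the doubly linked list; the `remove` call here is the O(n) middle-removal that the linked list with head and tail pointers makes O(1):

```python
class StackLRU:
    def __init__(self):
        self.stack = []            # index 0 = bottom (LRU), end = top (MRU)

    def reference(self, page):
        if page in self.stack:
            self.stack.remove(page)   # pull from the middle of the stack...
        self.stack.append(page)       # ...and push on top

    def victim(self):
        return self.stack[0]          # bottom of the stack is the LRU page
```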

LRU example by stack


Using a matrix
For a machine with n page frames, the LRU hardware can maintain a matrix of n × n bits, initially all zero. Whenever page frame k is referenced, the hardware first sets all the bits of row k to 1, then sets all the bits of column k to 0. At any instant of time, the row whose binary value is lowest is the least recently used. The row whose value is next lowest is next least recently used, and so forth.
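The matrix method can be simulated directly; rows are kept as bit lists and read as binary numbers when a victim is needed:

```python
class MatrixLRU:
    def __init__(self, n):
        self.n = n
        self.bits = [[0] * n for _ in range(n)]

    def reference(self, k):
        self.bits[k] = [1] * self.n          # set all bits of row k to 1
        for row in self.bits:
            row[k] = 0                       # then clear column k everywhere

    def victim(self):
        """Frame whose row has the smallest binary value is the LRU one."""
        values = [int("".join(map(str, row)), 2) for row in self.bits]
        return values.index(min(values))
```

In this sketch, ties (rows that are all zero) go to the lowest-numbered frame; real hardware could resolve them arbitrarily.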

LRU example by matrix


Overview
Most processes exhibit locality of reference even when they are using demand paging, meaning that during any phase of execution, the process references only a relatively small fraction of its pages. The set of pages that a process is currently using is known as its working set. If the entire working set is in memory, the process will run without causing many faults; but if the available memory is too small to hold the entire working set, the process will cause many page faults and run slowly.
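The working set is commonly formalized as W(t, Δ): the set of distinct pages referenced in the most recent Δ references. A minimal sketch, with the window size Δ as a parameter:

```python
def working_set(reference_string, t, delta):
    """Distinct pages among the delta references ending at position t."""
    start = max(0, t - delta + 1)
    return set(reference_string[start:t + 1])
```

A Δ that is too small misses whole localities; one that is too large spans several at once, which is why choosing the window size is hard in practice.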

Thrashing
If the process does not have the number of frames it needs to support pages in active use, it will quickly page-fault. At this point, it must replace some page, but since all its pages are in active use, it must replace a page that will be needed again right away. Consequently, it quickly faults again, and again, and again, replacing pages that it must bring back in immediately. This high paging activity is called thrashing. A process is thrashing if it is spending more time paging than executing.

Preventing thrashing with WS model (1)


To prevent thrashing, we must provide a process with as many frames as it needs, but how do we know how many frames it needs? The working-set strategy starts by looking at how many frames a process is actually using; this approach defines the locality of process execution. The locality model states that, as a process executes, it moves from locality to locality. A locality is a set of pages that are actively used together. A program is generally composed of several different localities, which may overlap.

Preventing thrashing with WS model (2)


Many paging systems try to keep track of each process's working set and make sure that it is in memory before letting the process run. This approach is called the working set model, and it is designed to greatly reduce the page fault rate. Loading the pages before letting processes run is called prepaging; note that the working set of a process changes over time as its locality changes.


Preventing thrashing with WS model (3)

If the whole working set is not in memory, a particular process is not allowed to run, and a process may be suspended for this reason. In practice, however, it is difficult to know when a working set changes or what the working-set window size should be. Thus, this method is not frequently used.


Preventing thrashing with PFF (1)


Thrashing is characterized by a high page-fault rate, and thus we want to control the page-fault rate. When it is too high, we know that the process needs more frames; if the page-fault rate is too low, then the process may have too many frames. We can establish upper and lower bounds on the desired page-fault rate: if the actual page-fault rate exceeds the upper limit, we allocate the process another frame; if the page-fault rate falls below the lower limit, we remove a frame from the process. Thus, we can directly measure and control the page-fault rate to prevent thrashing. This strategy uses the page-fault frequency (PFF).
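The PFF policy reduces to a small control loop. The default bounds below are illustrative numbers, not values from the slides:

```python
def adjust_frames(frames, fault_rate, lower=0.01, upper=0.10):
    """Grant a frame above the upper bound, reclaim one below the lower."""
    if fault_rate > upper:
        return frames + 1         # process is thrashing: needs more frames
    if fault_rate < lower and frames > 1:
        return frames - 1         # process can spare a frame for others
    return frames                 # fault rate in the acceptable band
```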

Preventing thrashing with PFF (2)

As with the working-set strategy, we may have to suspend a process. If the page-fault rate increases and no free frames are available, we must select some process and suspend it. The freed frames are then distributed to processes with high page-fault rates.


Basic Introduction

Overview
Up to now, it is assumed that virtual memory is transparent to processes and programmers. That is, all they see is a large virtual address space on a computer with a smaller physical memory. With many systems, that is true. But in some advanced systems, programmers have some control over the memory map and can use it in non-traditional ways to enhance program behaviour.


Reasons for virtual memory interface (1)


One reason for giving programmers control over their memory map is to allow two or more processes to share the same memory. If programmers can name regions of their memory, it may be possible for one process to give another process the name of a memory region so that process can also map it in. With two (or more) processes sharing the same pages, high bandwidth sharing becomes possible. One process writes into the shared memory and another one reads from it.

Reasons for virtual memory interface (2)


Sharing of pages can also be used to implement a high-performance message-passing system. Normally, when messages are passed, the data are copied from one address space to another, at considerable cost. If processes can control their page map, a message can be passed by having the sending process unmap the page(s) containing the message and the receiving process map them in. Here only the page names have to be copied, instead of all the data.

Distributed shared memory


The idea here is to allow multiple processes over a network to share a set of pages, possibly, but not necessarily, as a single shared linear address space. When a process references a page that is not currently mapped in, it gets a page fault. The page fault handler, which may be in the kernel or in user space, then locates the machine holding the page and sends it a message asking it to unmap the page and send it over the network. When the page arrives, it is mapped in and the faulting instruction is restarted.