Certain options and features of a program may be used rarely; for instance, the routines on U.S. government computers that balance the budget have not been used in many years.
Even in those cases where the entire program is needed, it may not all be needed at the same time.
Overview
With demand-paged virtual memory, pages are loaded only when they are demanded during program execution. Pages that are never accessed are thus never loaded into physical memory. A demand-paging system is similar to a paging system with swapping, where processes reside in secondary memory (usually a disk). A swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a process; thus we use the term pager, rather than swapper, in connection with demand paging.
Viraj Brian Wijesuriya Virtual Memory Management 9 / 55
Protection
We need some form of hardware support to distinguish between the pages that are in memory and the pages that are on the disk. The valid–invalid bit scheme can be used for this purpose. When this bit is set to valid, the associated page is both legal and in memory. If the bit is set to invalid, the page either is not valid (that is, not in the logical address space of the process) or is valid but is currently on the disk. While the process executes and accesses pages that are memory resident, execution proceeds normally.
Locality of reference
Some programs could access several new pages of memory with each instruction execution (one page for the instruction and many for data), possibly causing multiple page faults per instruction. This situation would result in unacceptable system performance. Fortunately, analysis of running processes shows that this behaviour is exceedingly unlikely. Programs tend to have locality of reference, which results in reasonable performance from demand paging.
Hardware support
The hardware to support demand paging is the same as the hardware for paging and swapping. Page table: this table has the ability to mark an entry invalid through a valid–invalid bit or a special value of protection bits. Secondary memory: this memory holds those pages that are not present in main memory; it is usually a high-speed disk, known as the swap device, and the section of disk used for this purpose is known as swap space.
Restarting an instruction
When a page fault occurs, we must be able to restart the process in exactly the same place and state, except that the desired page is now in memory and is accessible. A page fault may occur at any memory reference. If the page fault occurs on the instruction fetch, we can restart by fetching the instruction again; if a page fault occurs while we are fetching an operand, we must fetch and decode the instruction again and then fetch the operand. The major difficulty arises when one instruction may modify several different locations.
Demand paging can significantly affect the performance of a computer system. Let p be the probability of a page fault (0 ≤ p ≤ 1) and let ma be the memory access time. The effective access time is then: effective access time = (1 − p) × ma + p × page-fault time.
With an average page-fault service time of 8 milliseconds and a memory access time of 200 nanoseconds, the effective access time in nanoseconds is: effective access time = (1 − p) × 200 + p × 8,000,000 = 200 + 7,999,800 × p.
Terminology (1)
Traditionally, fork() worked by creating a copy of the parent's address space for the child, duplicating the pages belonging to the parent. However, considering that many child processes invoke the exec() system call immediately after creation, the copying of the parent's address space may be unnecessary.
Terminology (2)
Instead, we can use a technique known as copy-on-write (COW), which allows the parent and child processes initially to share the same pages. These shared pages are marked as copy-on-write pages, meaning that if either process writes to a shared page, a copy of the shared page is created. When the copy-on-write technique is used, only the pages that are modified by either process are copied; all unmodified pages can be shared by the parent and child processes.
Example
Copy-on-write is a common technique used by several operating systems, including Windows XP, Linux, and Solaris.
vfork() (1)
Several versions of UNIX (including Solaris and Linux) provide a variation of the fork() system call: vfork() (for virtual memory fork). With vfork(), the parent process is suspended, and the child process uses the address space of the parent. Because vfork() does not use copy-on-write, if the child process changes any pages of the parent's address space, the altered pages will be visible to the parent once it resumes. Therefore, vfork() must be used with caution to ensure that the child process does not modify the address space of the parent.
vfork() (2)
vfork() is intended to be used when the child process calls exec() immediately after creation. Because no copying of pages takes place, vfork() is an extremely efficient method of process creation and is sometimes used to implement UNIX command-line shell interfaces. vfork() simply creates a new process that shares the parent's virtual memory: it does not copy the address space of the parent process, but borrows the parent's memory and thread of control.
Basic Page Replacement LRU Page Replacement Algorithm The Working Set Page Replacement Algorithm
Overview
When a page fault occurs, the operating system has to choose a page to evict (remove from memory) to make room for the incoming page. If the page to be removed has been modified while in memory, it must be rewritten to the disk to bring the disk copy up to date. If the page has not been changed, the disk copy is already up to date and no rewrite is needed; the page to be read in simply overwrites the page being evicted.
Overview
The least recently used page replacement algorithm (LRU) is based on the observation that pages that have been heavily used in the last few instructions will probably be heavily used again in the next few. When a page fault occurs, throw out the page that has been unused for the longest time. Although LRU is theoretically realizable, it is not cheap. To fully implement LRU, it is necessary to maintain a linked list of all pages in memory, with the most recently used page at the front and the least recently used page at the rear.
LRU implementation
However, there are other ways to implement LRU with special hardware: using a counter, using a stack, or using a matrix.
Using a counter
We keep a 64-bit counter, C, that is automatically incremented after each instruction. Each page table entry must also have a field large enough to contain the counter. After each memory reference, the current value of C is stored in the page table entry for the page just referenced. When a page fault occurs, the operating system examines all the counters in the page table to find the lowest one; that page is the least recently used.
Using a stack
Another approach to implementing LRU replacement is to keep a stack of page numbers. Whenever a page is referenced, it is removed from the stack and put on the top. In this way, the most recently used page is always at the top of the stack and the least recently used page is always at the bottom. Because entries must be removed from the middle of the stack, it is best to implement this approach by using a doubly linked list with a head pointer and a tail pointer. The tail pointer points to the bottom of the stack, which is the LRU page.
Using a matrix
For a machine with n page frames, the LRU hardware can maintain a matrix of n × n bits, initially all zero. Whenever page frame k is referenced, the hardware first sets all the bits of row k to 1, then sets all the bits of column k to 0. At any instant of time, the row whose binary value is lowest is the least recently used. The row whose value is next lowest is next least recently used, and so forth.
Overview
Most processes exhibit locality of reference even when they are using demand paging: during any phase of execution, the process references only a relatively small fraction of its pages. The set of pages that a process is currently using is known as its working set. If the entire working set is in memory, the process will run without causing many faults; but if the available memory is too small to hold the entire working set, the process will cause many page faults and run slowly.
Thrashing
If the process does not have the number of frames it needs to support pages in active use, it will quickly page-fault. At this point, it must replace some page; but since all its pages are in active use, it must replace a page that will be needed again right away. Consequently, it quickly faults again, and again, and again, replacing pages that it must bring back in immediately. This high paging activity is called thrashing. A process is thrashing if it is spending more time paging than executing.
If the whole working set is not in memory, the process is not allowed to run, and it may be suspended as a result. In practice, it is difficult to know when a working set might change or what the working-set window size should be, so this method is not frequently used.
As with the working-set strategy, we may have to suspend a process. If the page-fault rate increases and no free frames are available, we must select some process and suspend it. The freed frames are then distributed to processes with high page-fault rates.
Basic Introduction
Overview
Up to now, we have assumed that virtual memory is transparent to processes and programmers: all they see is a large virtual address space on a computer with a smaller physical memory. On many systems, that is true. But in some advanced systems, programmers have some control over the memory map and can use it in non-traditional ways to enhance program behaviour.