CHAPTER 1: Compare Cache memory and Virtual memory
CHAPTER 2: Virtual Memory
I. Virtual Memory
II. The general idea of how Virtual memory works
III. Benefits of Virtual Memory
IV. Hardware and Control Structures
V. Execution of a process
VI. Implication
VII. Real and Virtual Memory
VIII. Thrashing
IX. Principle of Locality
X. Support Needed for Virtual Memory
XI. Paging
XII. Translation Lookaside Buffer (TLB)
XIII. Associative Mapping
XIV. Page Table and Virtual Memory
XV. Inverted Page Table
XVI. The Page Size Issue
XVII. Operating System Software
XVIII. Fetch Policy
XIX. Placement Policy
XX. Replacement Policy
XXI. Basic algorithms for the replacement policy
1. The LRU Policy
2. Note on counting page faults
3. Implementation of the LRU Policy
4. The FIFO Policy
5. The Clock Policy
2. Kernel Memory Allocator
XXIII. Linux memory management
XXIV. Windows memory management
1. Windows virtual address map
2. Windows paging
3. Android Memory management
CHAPTER 3: Flash Memory
I. What is flash memory
II. PN Junction
III. MOSFET
IV. CMOS
V. Reading NOR Flash and NAND Flash
1. Moving electrons: Flash Erase and Writes
2. Flash Erase and Write (NAND)
VI. Non-volatile flash memory stick and SD card
- Cache memory: Cache memory speeds up the CPU's access to frequently used instructions and data. When the CPU needs a program, it is copied from main memory into the cache; but if its copy is already present in the cache memory, then the program is executed directly.
- Virtual memory: Virtual memory increases the capacity of main memory. Virtual memory is not a storage unit; it is a technique. With virtual memory, even programs that are larger than main memory can be executed.
- Difference between Virtual memory and Cache memory:
1. Virtual memory increases the capacity of main memory, while cache memory increases the accessing speed of the CPU.
2. Virtual memory is not a memory unit; it is a technique. Cache memory is an actual memory unit.
3. The size of virtual memory is greater than that of cache memory.
4. The operating system manages virtual memory; the hardware manages cache memory.
5. With virtual memory, programs larger than main memory can be executed. In cache memory, recently used data is copied.
6. Virtual memory requires a mapping framework for address mapping; cache memory requires no such mapping framework.
7. Virtual memory is not as speedy as cache memory, which is a fast memory.
8. Virtual memory holds data or programs that cannot be placed entirely in main memory; cache memory holds frequently accessed data in order to reduce the access time of files.
9. With virtual memory, users can execute programs that take up more memory than main memory. The CPU takes more time to access main memory than to access the cache, which is why frequently accessed data is stored in cache memory so that access time can be minimized.
Virtual memory is a memory management technique in which secondary memory can be used as if it were part of main memory. Virtual memory is a common technique in a computer's operating system (OS). It uses both hardware and software to let a computer compensate for physical memory shortages, temporarily transferring data from random access memory (RAM) to disk storage. This makes it possible to keep working even when using large programs. However, users should not rely too heavily on virtual memory, since it is considerably slower than RAM. If the OS has to swap data between virtual memory and RAM too often, the computer begins to slow down; this is called thrashing.
When an application is in use, data from that program is stored in a physical address using RAM. A memory management unit maps the address to RAM and automatically translates addresses. The MMU can, for example, map a logical address space to a corresponding physical address. If, at any point, the RAM space is needed for something more urgent, data can be swapped out of RAM and into virtual memory.
The computer's memory manager is in charge of keeping track of the shifts between physical and virtual memory. This means using virtual memory generally causes a noticeable reduction in performance.
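The address translation performed by the MMU can be sketched as follows. This is a simplified software model, not a real MMU: the page size and the page-table contents below are invented for illustration.

```python
# Minimal sketch of virtual-to-physical address translation: the virtual
# address is split into a page number and an offset, and the page number is
# mapped to a frame number through a (hypothetical) page table.

PAGE_SIZE = 4096  # 4 KB pages, so the low 12 bits of an address are the offset

# hypothetical page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_address):
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page_number not in page_table:
        # the page is not in main memory: in a real system this triggers
        # a page fault and the OS swaps the page in from disk
        raise LookupError(f"page fault: page {page_number} not resident")
    frame_number = page_table[page_number]
    return frame_number * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # page 1, offset 0x234 -> frame 2 -> 0x2234
```

A reference to a page missing from the table raises a "page fault" here, which stands in for the swap-in path described above.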
- By using virtual memory, many applications or programs can be executed at a time.
- Main memory has limited space, but you can increase and decrease the size of virtual memory yourself.
- Users can run large programs that have a size greater than the main memory.
- Data that is common in memory can be shared between RAM and virtual memory.
- CPU utilization can be increased because more processes can reside in main memory.
- The cost of buying extra RAM is saved by using virtual memory.
<b>IV. Hardware and Control Structures</b>
Two characteristics fundamental to memory management:
i. All memory references are logical addresses that are dynamically translated into physical addresses at run time.
ii. A process may be broken up into a number of pieces that don't need to be contiguously located in main memory during execution.
If these two characteristics are present, it is not necessary that all of the pages or segments of a process be in main memory during execution.
- The operating system brings into main memory a few pieces of the program.
- Resident set: the portion of a process that is in main memory.
- An interrupt is generated when an address is needed that is not in main memory; the operating system places the process in a blocked state.
- The piece of the process that contains the logical address is brought into main memory: the operating system issues a disk I/O read request, and another process is dispatched to run while the disk I/O takes place.
- An interrupt is issued when the disk I/O is complete, which causes the operating system to place the affected process in the Ready state.
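The steps above can be sketched as a toy simulation. The process names, the ready queue, and the in-line "disk I/O" are all invented for illustration; a real operating system performs this sequence inside the kernel.

```python
# Schematic sketch of the page-fault handling sequence: block the faulting
# process, issue a disk read, dispatch another process, then mark the
# faulting process Ready when the I/O completes.

from collections import deque

resident_pages = {0, 1}          # pages of process A currently in main memory
ready_queue = deque(["B"])       # another process waiting to run
state = {"A": "Running", "B": "Ready"}

def reference(page):
    if page in resident_pages:
        return "hit"
    # Address not in main memory: an interrupt (page fault) is generated
    state["A"] = "Blocked"                  # OS places the process in a blocked state
    print("disk I/O read issued for page", page)
    running = ready_queue.popleft()         # dispatch another process meanwhile
    state[running] = "Running"
    # ... later, the disk I/O completion interrupt arrives:
    resident_pages.add(page)                # the piece is now in main memory
    state["A"] = "Ready"                    # faulting process is placed in Ready
    return "fault handled"

print(reference(1))   # hit
print(reference(3))   # fault handled
```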
More processes can be maintained in main memory, because only some pieces of each process need be loaded; therefore more processes can be maintained in memory. Furthermore, time is saved because unused pages are not swapped in and out of memory. In the steady state, practically all of main memory will be occupied with process pages, so that the processor and OS have direct access to as many processes as possible.
Simple paging and virtual memory paging (shared characteristics):
- Main memory is partitioned into small fixed-size chunks called frames.
- The program is broken into pages by the compiler or memory management system.
- Internal fragmentation within frames; no external fragmentation.
- The operating system must maintain a page table for each process showing which frame each page occupies, and a free frame list.
- The processor uses the page number and offset to calculate the absolute address.

Differences:
- Simple paging: all the pages of a process must be in main memory for the process to run, unless overlays are used.
- Virtual memory paging: not all pages of a process need be in main memory frames for the process to run; pages may be read in as needed, and reading a page into main memory may require writing a page out to disk.

Simple segmentation and virtual memory segmentation (shared characteristics):
- Main memory is not partitioned.
- Program segments are specified by the programmer to the compiler (i.e., the decision is made by the programmer).
- No internal fragmentation; external fragmentation.
- The operating system must maintain a segment table for each process showing the load address and length of each segment, and a list of free holes in main memory.
- The processor uses the segment number and offset to calculate the absolute address.

Differences:
- Simple segmentation: all the segments of a process must be in main memory for the process to run, unless overlays are used.
- Virtual memory segmentation: not all segments of a process need be in main memory for the process to run; segments may be read in as needed, and reading a segment into main memory may require writing one or more segments out to disk.
In the steady state, practically all of main memory will be occupied with process pieces, so that the processor and operating system have direct access to as many processes as possible. Thus, when the operating system brings one piece in, it must throw another out. In essence, the operating system tries to guess, based on recent history, which pieces are least likely to be used in the near future.
Program and data references within a process tend to cluster.
Only a few pieces of a process will be needed over a short period of time. Therefore, it is possible to make intelligent guesses about which pieces will be needed in the future.
Avoid thrashing.
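The principle of locality can be made concrete with a small experiment. The start address and access pattern below are arbitrary choices for illustration.

```python
# Illustration of locality: a sequential scan of 100 nearby references
# touches only a single distinct page, which is why it is possible to
# guess which pieces of a process will be needed soon.

PAGE_SIZE = 4096

# 1000 sequential 4-byte accesses starting at an arbitrary address
addresses = [0x8000 + 4 * i for i in range(1000)]

window = addresses[:100]                       # a short period of time
pages_in_window = {a // PAGE_SIZE for a in window}
print(len(pages_in_window))  # only 1 distinct page backs all 100 references
```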
Typically, each process has its own page table
Each page table entry contains a present bit to indicate whether the page is in main memory or not.
i. If it is in main memory, the entry contains the frame number of the corresponding page in main memory
ii. If it is not in main memory, the entry may contain the address of that page on disk or the page number may be used to index another table (often in the PCB) to obtain the address of that page on disk
A modified bit indicates if the page has been altered since it was last loaded into main memory
i. If no change has been made, the page does not have to be written to the disk when it needs to be swapped out
Other control bits may be present if protection is managed at the page level:
i. a read-only/read-write bit
ii. protection level bit: kernel page or user page (more bits are used when the processor supports more than 2 protection levels)
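A page-table entry with the control bits described above can be sketched as a small data structure. The field layout is illustrative and does not follow any particular processor's format.

```python
# Sketch of a page-table entry carrying the present, modified, and
# protection bits, and of why the modified bit matters when swapping out.

from dataclasses import dataclass

@dataclass
class PageTableEntry:
    present: bool          # is the page in main memory?
    modified: bool         # altered since it was last loaded?
    read_only: bool        # protection bit
    frame_number: int      # meaningful only when present is True
    disk_address: int      # where to find the page when not present

def swap_out(entry):
    """Only a modified page must be written back to disk; an unmodified
    page can simply be discarded, since the disk copy is still valid."""
    if entry.modified:
        print("writing page back to disk")
    entry.present = False

entry = PageTableEntry(present=True, modified=False, read_only=False,
                       frame_number=7, disk_address=0x9A00)
swap_out(entry)   # prints nothing: the unmodified page is discarded
```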
The TLB only contains some of the page table entries so we cannot simply index into the TLB based on page number.
Each TLB entry must include the page number as well as the complete page table entry.
The processor interrogates a number of TLB entries simultaneously to determine whether there is a match on the page number.
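A TLB lookup can be sketched as below. Real hardware compares all entries in parallel; the loop here only imitates that in software, and the TLB contents are invented for illustration.

```python
# Sketch of a TLB lookup: because the TLB holds only some of the page-table
# entries, each entry must store its page number alongside the frame number.

tlb = [  # (page_number, frame_number) pairs -- illustrative contents
    (12, 3),
    (7, 0),
    (31, 9),
]

def tlb_lookup(page_number):
    for entry_page, frame in tlb:   # hardware does this comparison in parallel
        if entry_page == page_number:
            return frame            # TLB hit
    return None                     # TLB miss: fall back to the page table

print(tlb_lookup(7))    # 0 (hit)
print(tlb_lookup(5))    # None (miss)
```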
The smaller the page size, the less internal fragmentation. However, more pages are then required per process.
More pages per process means larger page tables
For large programs in a heavily multiprogrammed environment, some portion of the page tables of active processes must be in virtual memory instead of main memory.
The physical characteristics of most secondary-memory devices favor a larger page size for more efficient block transfer of data
Most computer systems support a very large virtual address space
ii. If (only) 32 bits are used with 4KB pages, a page table may have 2^20 entries
The entire page table may take up too much main memory. Hence, page tables are often also stored in virtual memory and subjected to paging
i. When a process is running, part of its page table must be in main memory (including the page table entry of the currently executing page)
Another solution (PowerPC, IBM RISC System/6000) to the problem of maintaining large page tables is to use an Inverted Page Table (IPT)
We generally have only one IPT for the whole system
There is only one IPT entry per physical frame (rather than one per virtual page):
i. this greatly reduces the amount of memory needed for page tables
The 1st entry of the IPT is for frame #1, ..., the nth entry of the IPT is for frame #n, and each of these entries contains the virtual page number. Thus, this table is inverted.
The process ID with the virtual page number could be used to search the IPT to obtain the frame #
For better performance, hashing is used to obtain a hash table entry which points to an IPT entry:
i. a page fault occurs if no match is found
ii. chaining is used to manage hashing overflow
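The hash-and-chain lookup can be sketched as follows. The table sizes and the hash function are invented for illustration; real IPT formats are processor-specific.

```python
# Sketch of an inverted page table: one entry per physical frame, indexed
# through a hash on (process id, virtual page number), with chaining used
# to resolve hash collisions.

NUM_FRAMES = 8

# each IPT entry: (process_id, virtual_page, chain_to_next_frame) or None
ipt = [None] * NUM_FRAMES
hash_table = [None] * NUM_FRAMES     # hash bucket -> first IPT index

def bucket(pid, vpage):
    return (pid * 31 + vpage) % NUM_FRAMES   # illustrative hash function

def insert(pid, vpage, frame):
    b = bucket(pid, vpage)
    ipt[frame] = (pid, vpage, hash_table[b])  # chain to the previous head
    hash_table[b] = frame

def lookup(pid, vpage):
    frame = hash_table[bucket(pid, vpage)]
    while frame is not None:                  # follow the collision chain
        entry_pid, entry_vpage, chain = ipt[frame]
        if (entry_pid, entry_vpage) == (pid, vpage):
            return frame                      # match: this is the frame number
        frame = chain
    return None                               # no match: page fault

insert(pid=1, vpage=20, frame=2)
insert(pid=2, vpage=5, frame=6)   # hashes to the same bucket: chained
print(lookup(1, 20))   # 2 (found by following the chain)
print(lookup(1, 99))   # None -> page fault
```

Note that the two inserted pages collide in the same hash bucket, so the first lookup actually exercises the chain.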
Page size is defined by the hardware and is always a power of 2, for more efficient logical-to-physical address translation. But exactly which size to use is a difficult question:
i. A large page size is good because, with a small page size, more pages are required per process.
ii. More pages per process means larger page tables; hence, a large portion of the page tables of active processes must reside in virtual memory.
iii. A small page size is good to minimize internal fragmentation.
iv. A large page size is good since disks are designed to efficiently transfer large blocks of data.
v. Larger page sizes mean fewer pages in main memory; this increases the TLB hit ratio.
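The trade-off between internal fragmentation and page-table size can be put in rough numbers. The figures below are illustrative: they assume a 32-bit virtual address space, 4-byte page-table entries, and the common rule of thumb that internal fragmentation wastes about half a page per process on average.

```python
# Back-of-the-envelope arithmetic for the page-size trade-off: smaller pages
# waste less space per process but need far larger page tables.

ADDRESS_SPACE = 2 ** 32      # 4 GB virtual address space (assumed)
PTE_SIZE = 4                 # bytes per page-table entry (assumed)

for page_size in (1024, 4096, 16384):
    num_pages = ADDRESS_SPACE // page_size
    table_size = num_pages * PTE_SIZE
    avg_waste = page_size // 2                  # internal fragmentation
    print(f"{page_size // 1024:>3} KB pages: "
          f"{num_pages} pages, "
          f"{table_size // (1024 * 1024)} MB page table, "
          f"~{avg_waste} B wasted per process")
```

For example, with 4KB pages the table has 2^20 entries (4 MB at 4 bytes each), matching the figure given earlier in this section.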
The graph of page fault rate versus page size shows the following behavior:
- With a very small page size, each page matches the code that is actually used, so faults are low.
- An increased page size causes each page to contain more code that is not used, so page faults rise.
- Page faults decrease again as the page size approaches point P, where the size of a page equals the size of the entire process.
The page fault rate is also determined by the number of frames allocated per process: page faults drop to a reasonable value when W frames are allocated, and drop to 0 when the number N of frames is such that the process is entirely in memory.
Page sizes from 1KB to 4KB are most commonly used