When a process is admitted into the system, it requires memory, so a memory management algorithm is needed to allocate space to it. When the process no longer exists, the memory manager should also free up its memory.
When referring to addresses, we encounter two types of addresses:
Logical Addresses - Generated by the CPU, also known as the Virtual Address
Physical Addresses - The actual memory location seen by the memory unit (i.e. the RAM)
There are three ways a program can be situated in memory, referred to as how the program is "bound" to memory.
The first type of Address Binding is Compile-Time address binding. Processes that always end up in the same spot use Compile-Time binding. The compiler generates absolute addresses (e.g. 80h) during compilation.
Processes that appear at a different location each time they are run use Load-Time address binding. The compiler generates relocatable (i.e. relative) addresses (e.g. Fish.dll+5000) and the linkage editor/loader generates the absolute address.
Processes that can be moved to different locations during execution use Execution-Time address binding. The compiler still generates relocatable (i.e. relative) addresses, but the final binding to physical addresses is delayed until run time, which requires hardware support such as the MMU. There are additional control schemes present which will not be discussed.
Access to the actual physical memory (i.e. Memory location 5638 on the RAM) is controlled by the MMU (Memory Management Unit). Whenever a context switch occurs, the new process's Memory Base (i.e. Where it is located on the RAM) is written to the Relocation Register on the MMU.
Whenever the program wants to access a memory location, it refers to its Virtual Address (e.g. Address location 638), and the MMU would translate this into the Physical Address by adding the base (If the base is 5000, then it would send a request to location 5638).
The program only deals with the logical address, that is, from 0 to limit where limit is the maximum size of the process.
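Here's a minimal C sketch of that relocation-register translation (the struct, field names, and the 1000-byte limit are made up for illustration; the base of 5000 and the virtual address 638 are the numbers from above):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical MMU state: the relocation (base) register and a limit
   register, reloaded on every context switch. */
struct mmu {
    unsigned long base;   /* where the process starts in RAM */
    unsigned long limit;  /* maximum size of the process     */
};

/* Translate a virtual address: trap if it is past the limit,
   otherwise add the relocation base. */
unsigned long translate(const struct mmu *m, unsigned long virtualAddress)
{
    if (virtualAddress >= m->limit) {
        fprintf(stderr, "trap: address %lu out of range\n", virtualAddress);
        exit(EXIT_FAILURE);
    }
    return m->base + virtualAddress;
}

int main(void)
{
    struct mmu m = { .base = 5000, .limit = 1000 };
    printf("%lu\n", translate(&m, 638));   /* prints 5638, as in the example */
    return 0;
}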
Here's a diagram that shows the MMU:
(Taken from recommended text)
Now that we know WHERE to locate a process, we need to go into HOW the program is allocated its memory. There are two types of allocation: contiguous and non-contiguous.
In contiguous memory allocation, as its name suggests, a process can only be allocated a contiguous region. If a free region cannot fit the program's memory requirements, the program cannot be partially loaded there.
The text discusses three types of Contiguous Memory Allocation:
First Fit - Fit it in the first available space that can accommodate the program
Worst Fit - Fit it in the largest available space that can accommodate the program
Best Fit - Fit it in the smallest available space that can accommodate the program
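Here's a rough C sketch of the three strategies over a made-up list of free hole sizes (the function names and the hole array are purely illustrative):

#include <stddef.h>

/* Each function picks a hole from a hypothetical free-hole list
   (sizes in KB) and returns its index, or -1 if nothing fits. */

int first_fit(const int holes[], size_t n, int request)
{
    for (size_t i = 0; i < n; i++)          /* first hole big enough */
        if (holes[i] >= request)
            return (int)i;
    return -1;
}

int best_fit(const int holes[], size_t n, int request)
{
    int best = -1;
    for (size_t i = 0; i < n; i++)          /* smallest hole that still fits */
        if (holes[i] >= request && (best == -1 || holes[i] < holes[best]))
            best = (int)i;
    return best;
}

int worst_fit(const int holes[], size_t n, int request)
{
    int worst = -1;
    for (size_t i = 0; i < n; i++)          /* largest hole that fits */
        if (holes[i] >= request && (worst == -1 || holes[i] > holes[worst]))
            worst = (int)i;
    return worst;
}

With holes {30, 10, 50, 20} and a 15KB request, first fit picks the 30KB hole, best fit picks the 20KB hole, and worst fit picks the 50KB hole.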
The actual definition of Internal and External fragmentation is different from what we've been learning so I'll just show you the actual one (not in the seminar).
Internal Fragmentation is memory allocated that is not used AND CANNOT be used by other processes. Take for example a memory allocation scheme where memory is allocated only by blocks of 32KB. If a process requires 100KB, it would be allocated 128KB minimum. The extra 28KB is internal fragmentation because it is allocated, is not used, and cannot be used by another process.
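As a quick sanity check on that arithmetic, here is a tiny C sketch (the 32KB block size and 100KB request are the numbers from the example; everything else is made up):

#include <stdio.h>

#define BLOCK_KB 32   /* memory is handed out only in 32KB blocks */

int main(void)
{
    int requestKB   = 100;                                   /* the 100KB process */
    int blocks      = (requestKB + BLOCK_KB - 1) / BLOCK_KB; /* ceiling division  */
    int allocatedKB = blocks * BLOCK_KB;
    printf("allocated %dKB, internal fragmentation %dKB\n",
           allocatedKB, allocatedKB - requestKB);            /* 128KB and 28KB    */
    return 0;
}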
Another Internal Fragmentation example is the use of MPS (Multiple Partition Allocation Scheme). In MPS, memory is broken into many partitions. The rule here is that each partition can only fit ONE PROCESS. After the process is fitted into a partition through a memory allocation algorithm, the extra space (e.g. a 90KB process in a 95KB partition leaves 5KB) is said to be Internal Fragmentation because a 3KB process cannot be put in there even though there is 5KB left (only one process per partition).
If a RAM has multiple partitions, and somehow each partition allows more than one process, then there is NO (or negligible) internal fragmentation.
Now comes External Fragmentation. External Fragmentation results from memory being broken into scattered holes by the creation and termination of processes. Take this example of a single partition:
At first, A, B and C are loaded:
A: 10KB
B: 20KB
C: 30KB
Free: 40KB
Now, B is taken out, leaving a hole of 20KB there:
A: 10KB
Free: 20KB
C: 30KB
Free: 40KB
Process D wants to be loaded, and it requires 50KB of memory. The total free space is 60KB, but the space is suffering from External Fragmentation. The allocation must be contiguous, so D cannot be loaded into either hole. The External Fragmentation is said to be 60KB.
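A small C sketch of that check (the 20KB and 40KB hole sizes come from the layout above; the rest is illustrative):

#include <stdio.h>

int main(void)
{
    int holes[] = { 20, 40 };   /* the two free holes after B leaves */
    int total = 0, largest = 0;
    for (int i = 0; i < 2; i++) {
        total += holes[i];
        if (holes[i] > largest)
            largest = holes[i];
    }
    int requestD = 50;
    printf("total free %dKB, largest hole %dKB\n", total, largest);
    if (largest < requestD)
        printf("D (%dKB) cannot fit; all %dKB counts as External Fragmentation\n",
               requestD, total);
    return 0;
}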
How much Internal Fragmentation is there? Notice that there is no minimum allocation block. In this example, no space is allocated yet denied to other processes (e.g. if a new 15KB process arrives, it can use the 20KB hole, leaving 5KB for other processes), so Internal Fragmentation is negligible here.
If there are multiple partitions, and it is using the MPS scheme (only one process per partition) then the External Fragmentation is the sum of ALL empty partitions. If there are multiple partitions and more than one process can fit, then the External Fragmentation is the sum of ALL free space.
As there is some confusion about Compaction, I'll leave it out until it is confirmed.
Now we'll talk about non-contiguous memory allocation. The first method we are going to discuss is Paging. Paging simplifies things by breaking everything into fixed-sized blocks. That is to say, logical memory is broken into Pages of size x, physical memory is broken into Frames of size x, and the backing store (secondary storage device) is broken into blocks of the same size x.
A Page Table keeps track of all pages. Think of the Page Table as an array:
pageTable[x];
x is the number of pages a program has. Each entry in the array contains the base address of the frame where that page is stored in physical memory. That is to say, if we are looking for Page 5 (where the first page is 0), we look at pageTable[5] (the sixth entry), which tells us where page 5 sits in physical memory.
(Taken from recommended text)
Each memory location generated by the processor (i.e. A Virtual Address), is converted into a Page Number and an Offset. The Virtual Address is translated as such:
pageNumber = virtualAddress/blockSize;
offset = virtualAddress%blockSize;
The actual physical address accessed would be translated as such:
physicalAddress = pageTable[pageNumber] + offset;
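Putting those lines together, here is a runnable C sketch (the 1024-byte block size, the page-table contents, and the sample address are all made up for illustration):

#include <stdio.h>

#define BLOCK_SIZE 1024   /* assumed page/frame size in bytes */
#define NUM_PAGES  4      /* assumed number of pages          */

/* Hypothetical page table: each entry holds the base address of the
   frame that the page lives in, as described above. */
unsigned long pageTable[NUM_PAGES] = { 8192, 4096, 20480, 12288 };

unsigned long translate(unsigned long virtualAddress)
{
    unsigned long pageNumber = virtualAddress / BLOCK_SIZE;
    unsigned long offset     = virtualAddress % BLOCK_SIZE;
    return pageTable[pageNumber] + offset;   /* frame base + offset */
}

int main(void)
{
    /* 2100 -> page 2, offset 52 -> 20480 + 52 = 20532 */
    printf("%lu\n", translate(2100));
    return 0;
}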
The advantage of Paging is that it gets rid of External Fragmentation totally. However, Internal Fragmentation still applies (recall the example above where a process is 100KB and the allocation size is 32KB).
At times the page table is larger than the number of pages a process has, so a Valid/Invalid bit is set for each entry in the Page Table (e.g. if a process has pages 0 to 5 but is given a Page Table with entries 0 to 8, then entries 0 to 5 are set to valid, while 6 to 8 are set to invalid). Access to an invalid page results in a trap.
As you might have guessed, the page table is stored in none other than the PCB. Other than the Valid/Invalid bits, the system can also make use of a PTLR (Page-Table Length Register) to keep track of the length of the Page Table.
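A rough sketch of how the Valid/Invalid bit and the PTLR might be checked (the struct and field names are invented; the numbers match the pages 0 to 5 example above):

#include <stdio.h>
#include <stdlib.h>

#define TABLE_SIZE 9   /* page table entries 0..8, as in the example */

/* Hypothetical page-table entry carrying a valid bit next to the frame base. */
struct pte {
    unsigned long frameBase;
    int valid;                 /* 1 = belongs to the process, 0 = trap */
};

struct pte pageTable[TABLE_SIZE];
unsigned long ptlr = 6;        /* the process only owns pages 0..5 */

unsigned long lookup(unsigned long pageNumber)
{
    /* Either check catches a bad reference: the PTLR rejects pages past
       the end of the process, the valid bit rejects everything else. */
    if (pageNumber >= ptlr || !pageTable[pageNumber].valid) {
        fprintf(stderr, "trap: invalid page %lu\n", pageNumber);
        exit(EXIT_FAILURE);
    }
    return pageTable[pageNumber].frameBase;
}

int main(void)
{
    for (unsigned long p = 0; p < ptlr; p++) {   /* mark pages 0..5 valid */
        pageTable[p].valid = 1;
        pageTable[p].frameBase = p * 4096;       /* made-up frame bases   */
    }
    printf("%lu\n", lookup(3));   /* fine                                 */
    lookup(7);                    /* traps: beyond the PTLR and invalid   */
    return 0;
}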
Another non-contiguous memory allocation method is Segmentation. Segmentation is similar to paging, except that it allocates memory in variable-sized segments instead of fixed-size pages and frames. Since the memory allocated matches each segment exactly, there is no Internal Fragmentation. In a segmentation system, there is only External Fragmentation.
Instead of a Page Table, we have a Segmentation Table. It works similar to the Page Table, except that it now looks like an array like this:
segmentTable[numberOfSegments][2]
The first dimension corresponds to the segment number, while the second dimension holds the Limit and Base. Limit simply means the length of the segment, and Base is the physical address where the segment begins.
(Taken from recommended text)
The compiler first divides the process into its different parts. Each part (e.g. Subroutine, Stack, Symbol Table, Functions, Main Program) resides in a different segment. The user's access to the memory is now through:
physicalAddress = segmentTable[segmentNumber][1] + offset;
where index [1] is the Base; the access is only legal if offset is less than segmentTable[segmentNumber][0], the Limit (otherwise the hardware traps).
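Here's a minimal C sketch of that lookup, including the limit check (the segment-table contents are made-up example values):

#include <stdio.h>
#include <stdlib.h>

#define NUM_SEGMENTS 3
#define LIMIT 0   /* second dimension: [0] = Limit */
#define BASE  1   /*                   [1] = Base  */

/* Hypothetical segment table laid out exactly as described above. */
unsigned long segmentTable[NUM_SEGMENTS][2] = {
    { 1000, 1400 },   /* segment 0: 1000 bytes long, starting at 1400 */
    {  400, 6300 },   /* segment 1 */
    {  400, 4300 },   /* segment 2 */
};

unsigned long translate(unsigned long segmentNumber, unsigned long offset)
{
    /* The offset must be within the segment's Limit, otherwise trap. */
    if (segmentNumber >= NUM_SEGMENTS ||
        offset >= segmentTable[segmentNumber][LIMIT]) {
        fprintf(stderr, "trap: offset %lu outside segment %lu\n",
                offset, segmentNumber);
        exit(EXIT_FAILURE);
    }
    return segmentTable[segmentNumber][BASE] + offset;
}

int main(void)
{
    printf("%lu\n", translate(2, 53));   /* 4300 + 53 = 4353 */
    return 0;
}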
Wednesday, February 23, 2011