2. INTRODUCTION For embedded systems, the rise in processing speeds of embedded processors and the evolution of microcontrollers have made it possible to run computation- and data-intensive applications on small embedded devices that previously ran only on desktop-class systems. From a memory standpoint, there is a similar need to run larger, more data-intensive applications on embedded devices. However, support for large memory address spaces, and specifically for virtual memory, has long been lacking on MMU-less embedded systems. Virtual memory can nevertheless be implemented for MMU-less systems entirely in software, based on an application-level virtual memory library and a virtual-memory-aware assembler. This type of implementation is open to the programmer and can be modified at any time.
3. Many software developers in recent years have turned to Linux as their operating system of choice. Until the advent of uClinux, developers of smaller embedded systems, usually built around microprocessors with no memory management unit, could not take advantage of Linux in their designs. uClinux is a variant of mainstream Linux that runs on MMU-less processor architectures. Component cost is a primary concern in embedded systems, which are typically required to be small and inexpensive. Microprocessors with an on-chip memory management unit (MMU) tend to be complex and expensive, and as such are not typically selected for small, simple embedded systems that do not require them. Given the advantages of MMU-less systems (such as speed and cost-effectiveness), their use is increasing day by day, and a proper, stable, real-time operating system is required for them. In this paper I study this type of operating system and schemes for implementing virtual memory in software.
10. Memory Management and the Memory Management Unit. Memory management is the act of managing computer memory. In its simplest form, this involves providing ways to allocate portions of memory to programs at their request, and to free them for reuse when no longer needed. The management of main memory is critical to the computer system. A memory management unit (MMU), sometimes called a paged memory management unit (PMMU), is a computer hardware component responsible for handling accesses to memory requested by the central processing unit (CPU). Its functions include translation of virtual addresses to physical addresses (i.e., virtual memory management), memory protection, cache control, and bus arbitration.
12. Why create an MMU-less Linux? Linux has become popular on embedded devices, especially consumer gadgets, telecom routers and switches, Internet appliances, and automotive applications. Because of the modular nature of Linux, it is easy to slim down the operating environment by removing utility programs, tools, and other system services that are not needed in an embedded environment. One advantage of Linux is that it is a fully functional OS with networking support, which is becoming a very important requirement in embedded systems. Modules can be loaded into and unloaded from the kernel at runtime, which makes embedded Linux very flexible. It is also encouraging that the Linux code is widely available, portable to almost any processor, scalable, and stable.
14. The diagram below shows the data flow towards the target system, starting from the application source code. Importantly, the application's view of the address space is as large as the secondary storage, i.e., it sees the full virtual address space. The virtual memory library consists of an implementation of virtual-to-physical address translation (vm.c), together with a header file (vm.h) of configurable parameters (page size, RAM size, etc.).
18. Virtual Memory Approach

Algorithm for virtual-to-physical memory address translation:

function virtual-to-physical
    Input:  va : virtual address; wr : 1 => write, 0 => read
    Output: pa : physical address
    Decompose va into (tag, page, offset)
    pte <- PTE[page]
    if pte.tag = va.tag and pte.valid = 1 then          (hit)
        if wr = 1 then
            pte.dirty <- 1
        end if
        return pa = page * sizeof(page) + offset
    end if
    if pte.valid = 1 and pte.dirty = 1 then             (write back victim)
        write RAM[page] to secondary store
    end if
    read newpage, where va belongs to newpage, from secondary store
    update pte.tag, pte.page, pte.valid
    if wr = 1 then
        pte.dirty <- 1
    end if
    return pa = page * sizeof(page) + offset
end function
19. Approach 1 - Pure VM: In the application, every memory access is made in a virtual address space and is translated to a physical address at runtime using the translation algorithm. This approach is fully transparent to the application, which accesses memory directly and has full control over it. Drawback: since every memory address is virtualized, the virtual-to-physical translation function is invoked on every access, which is an extra load on the system.
20. Approach 2 - Fixed Address VM: In this approach, a region of memory is marked as virtualized, and any memory access (load/store) that falls inside this marked region is translated. The programmer must indicate the virtual region to the vm-assembler. As opposed to the previous approach, the overhead of virtual-to-physical translation is incurred only for the memory region marked as virtual. This, however, requires a runtime check at every load/store to determine whether the address is virtualized. It is achieved by modifying the vm-assembler so that it inserts code performing this runtime check on every memory access and translating only those addresses that are virtualized. In our experiments, we tested this approach by marking the entire data region belonging to global variables as part of the virtual address space.
21. Approach 3 - Selective VM Selective VM is similar to the previous approach, but is more fine-grained in terms of memory that is virtualized. Note that in the previous approach, a runtime check was required on every memory access to determine if the address is virtualized. Selective VM avoids this runtime check overhead by annotating data structures at source level. It requires the programmer to tag individual data structures as belonging to virtual address space (as opposed to an entire region). This annotation is done at variable declaration, using a #pragma directive. Any use or def of annotated data structure in the source is modified to a call to the virtual-to-physical function. This approach significantly reduces the runtime overhead by restricting the translation only to large data structures that can reap benefit out of virtualization. It gives the embedded programmer more control on what is virtualized. However, this approach is the least transparent to the application programmer compared to the other two approaches.
24. Future Perspective: My future work will focus on optimizing the virtual-to-physical translation, which can reduce the number of execution cycles. I also plan to consider additional caching techniques, such as associative schemes. In addition, I am exploring ways of implementing a software MMU for conventional systems (laptops or desktops) so that memory interaction can be reduced, increasing overall system speed.
25. Conclusion: My current study concerns a software virtual memory scheme for MMU-less embedded systems, built using a vm-aware assembler and a virtual memory library. The virtual memory system presented can be customized by adjusting two configuration parameters, namely RAM size and page size. My future work will focus on optimizing this virtual memory implementation to reduce execution cycles. I also want to extend this project to reduce the energy consumption of systems containing an MMU (CPUs, laptops, and other embedded devices).
Editor's Notes
The processor-generated virtual address consists of B bits (if the secondary storage is large enough, we could have B = N). This address is broken down into an offset (the K least significant bits), a page number (the next M - K bits), and a tag (the remaining N - M bits). The page bits of the virtual address point to an entry in the page table (PTE). The tag bits of the virtual address are compared to the tag field of this PTE. If the valid bit is set and the tags match, we have a hit (i.e., the physical address is in main memory). The actual physical address is computed by concatenating the offset bits of the virtual address with the page entry of the PTE. Cases other than a hit can arise, depending on the result of comparing the tags, the valid bit, and the dirty bit; these are handled in the virtual-to-physical translation algorithm. This algorithm is similar to the direct-mapped address translation used in traditional operating systems. Note that the virtual memory library, mentioned in the previous section, contains the implementation of this algorithm.