In contrast to the .NET Framework, which is available for desktops and servers running Windows, the .NET Compact Framework is designed to run on a variety of operating systems and has a unique architecture to provide this cross-OS functionality. Known as the Platform Adaptation Layer, this architecture provides an abstraction layer between the host operating system’s specific APIs and the .NET Compact Framework’s requirements. In the future, the .NET Compact Framework could easily be ported to other host operating systems by creating a corresponding PAL to match the common language runtime’s (CLR) requirements and the new host operating system’s capabilities.
Built on top of the PAL (see Figure 2.7), the .NET Compact Framework implements a CLR. The runtime executes Microsoft Intermediate Language (MSIL), a processor-independent instruction set that compilers emit, by using a just-in-time (JIT) compiler to convert the MSIL to the specific processor’s machine code. For each processor the .NET Compact Framework supports, there is a corresponding runtime implementation that JIT-compiles and runs applications. The .NET Compact Framework supports all the processors that the Windows CE operating system supports, including StrongARM, MIPS, x86, SH4, XScale, and several other related processors.
Built on top of the CLR is the programming infrastructure from which the entire system gets its name, the .NET Compact Framework. The .NET Compact Framework’s core is a set of CLI-compliant base class libraries that provide building-block functionality for all applications, including basic file I/O, networking, and XML.
In addition to meeting these ECMA-defined specifications, the .NET Compact Framework includes higher-level functionality such as XML Web service support and a graphics and Windows Forms library that exploits the graphics capabilities of the Windows CE operating system and Microsoft Pocket PC. An interoperability mechanism that manages calls between the CLR and natively compiled system components provides access to the platform’s native software components.

Figure 2.7. The .NET Compact Framework.

Finally, the .NET Compact Framework implements ADO.NET, the data-access technology available in the .NET Framework. This data-access model consists of an in-memory relational store that can be persisted to disk as an XML file or mapped to a device or server database. This lets developers access data easily from remote databases, handle it in memory, store it in local sources, and synchronize local sources with remote ones.
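The XML round trip described above can be sketched with the DataSet API. This is a minimal illustration in desktop .NET style (the .NET Compact Framework’s ADO.NET subset is similar but not identical), and the Contacts/Person names and sample rows are made up for the example:

```csharp
using System.Data;
using System.IO;

static class XmlRoundTrip
{
    // Build a small in-memory relational store, persist it as XML,
    // then load it back into a fresh DataSet.
    public static int RoundTripRowCount()
    {
        var ds = new DataSet("Contacts");
        var people = ds.Tables.Add("Person");
        people.Columns.Add("Name", typeof(string));
        people.Columns.Add("Phone", typeof(string));
        people.Rows.Add("Ada", "555-0100");
        people.Rows.Add("Grace", "555-0101");

        // Serialize the store, schema included, to an XML string.
        var writer = new StringWriter();
        ds.WriteXml(writer, XmlWriteMode.WriteSchema);

        // Reload the XML into a new in-memory store.
        var restored = new DataSet();
        restored.ReadXml(new StringReader(writer.ToString()));
        return restored.Tables["Person"].Rows.Count;
    }
}
```

The same WriteXml call could target a file on the device instead of a StringWriter, which is how a local store would typically be persisted between application runs.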
2.5 CHOOSING THE RIGHT PROGRAMMING LANGUAGE
Like the .NET Framework, the .NET Compact Framework answers the age-old question, “What language is best?” Its novel response is, “It doesn’t matter!” All code is compiled first to MSIL and then to native code, so the language in which the source was originally written does not affect the JIT compiler. Developers can therefore choose the language best suited to the task at hand.
Microsoft currently supports two languages on the .NET Compact Framework: C# (pronounced C-sharp) and Visual Basic .NET. Because the platform is source-language-agnostic, Microsoft or third-party language vendors might eventually add support for other languages to meet developers’ language preferences. Indeed, at the time of writing this article, several independent language and tools vendors are implementing their own languages on the .NET Compact Framework.
2.6 BRINGING BIG FEATURES TO SMALL DEVICES
The fundamental problems in designing the .NET Compact Framework revolve around one concept: how to redesign and refactor the feature-rich .NET Framework, built for desktop PCs and servers, so that it fits small devices. Machines running the .NET Framework typically have Pentium-class or better processors with copious memory, whereas the .NET Compact Framework’s hosts are small devices with a few megabytes of memory and clock speeds that might not even reach triple digits. Thus, the design goals were twofold:
- Decrease the system’s footprint from the 20-plus Mbytes that appear on machines running Windows to approximately 1.5 Mbytes.
- Optimize the common language runtime’s various components to provide the fastest-possible execution time on small devices.
The common language runtime must JIT-compile code to run on the host device and manage the available memory to maximize the device’s performance.
In the .NET Compact Framework, the optimized JIT compiler translates MSIL to native code on a method-by-method or type-by-type basis. It does not compile a segment of MSIL containing a particular method or type until the first time that method or type is called. Once it compiles a method, it caches the natively compiled code for later reuse. The obvious advantage of this system over an interpreter-based one is that a method need not be recompiled each time it is called but, rather, can run as instructions native to the processor. This adds overhead on the first method call in a particular instance of an application, but the cost is amortized over subsequent calls, bringing execution close to native speed over the life of the application and reducing battery consumption by running more efficient code.
To optimize the memory used for execution, the CLR also has the prerogative in situations of extreme memory pressure to “pitch” compiled code—that is, to throw cached code out of memory to provide a larger space in which the application can continue to run. This happens only in extreme situations, such as when the application loads large volumes of data into memory. It lets the CLR reclaim memory occupied by seldom-run code to store data or cache code that is run more frequently.
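The compile-on-first-call caching and code pitching described in the last two paragraphs can be sketched as a simple memoization analogy. This is purely illustrative: the JitCache type is hypothetical, and the real runtime operates on MSIL and machine code, not delegates:

```csharp
using System;
using System.Collections.Generic;

// Analogy only: "compile" a method body the first time it is requested,
// serve the cached native version afterward, and pitch the whole cache
// under memory pressure so methods are re-JITted on their next call.
class JitCache
{
    readonly Dictionary<string, Func<int, int>> native =
        new Dictionary<string, Func<int, int>>();

    public int Compilations { get; private set; }

    public Func<int, int> GetOrCompile(string method, Func<int, int> msilBody)
    {
        Func<int, int> compiled;
        if (!native.TryGetValue(method, out compiled))
        {
            Compilations++;        // JIT cost paid once per cached method
            compiled = msilBody;   // stand-in for the MSIL -> machine-code step
            native[method] = compiled;
        }
        return compiled;
    }

    // Under extreme memory pressure, throw the cached code away to free space.
    public void Pitch()
    {
        native.Clear();
    }
}
```

Calling GetOrCompile for the same method name repeatedly pays the compilation cost only once; after a Pitch, the next call pays it again, which mirrors how pitched methods are simply re-JITted on demand.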
Finally, the garbage collector cleans up after the developer as the running application allocates and discards objects and types. The garbage collector is a simple mark-and-sweep collector that periodically marks memory that is no longer in use—that is, memory containing objects and types no longer in scope. When the ratio of marked memory to in-use memory hits a certain threshold, heap compaction occurs, creating larger open pieces of memory that the runtime can reallocate to the application for either code or data storage.
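The mark-and-sweep idea can be sketched over a toy object graph. Again, this is only an analogy: the Node and Collector types are invented for the example, and the real collector works directly on the managed heap rather than on lists of nodes:

```csharp
using System.Collections.Generic;

// A toy heap object that can reference other objects.
class Node
{
    public List<Node> Refs = new List<Node>();
    public bool Marked;
}

class Collector
{
    public List<Node> Heap = new List<Node>();

    // Mark everything reachable from the roots, then sweep the rest.
    public int Collect(IEnumerable<Node> roots)
    {
        foreach (var n in Heap) n.Marked = false;
        foreach (var r in roots) Mark(r);            // mark phase
        return Heap.RemoveAll(n => !n.Marked);       // sweep phase: count freed
    }

    void Mark(Node n)
    {
        if (n.Marked) return;
        n.Marked = true;
        foreach (var r in n.Refs) Mark(r);
    }
}
```

An object that no live root can reach—directly or through a chain of references—is swept away; compaction (not shown) would then slide the survivors together to create larger contiguous free blocks.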