System software supports the use of the operating system and the computer system as a whole. It includes diagnostic tools, compilers, servers, windowing systems, utilities, language translators, data communication programs, data management programs and more. The purpose of system software is to insulate the application programmer as much as possible from the details of the particular computer complex being used, especially memory and other hardware features, and from accessory devices such as communications equipment, printers, readers, displays and keyboards.
Temporary File
Temporary files may be created by computer programs for a variety of purposes: principally when a program cannot allocate enough memory for its tasks, when the program is working on data larger than the architecture's address space, or as a primitive form of inter-process communication.
Auxiliary memory
Modern operating systems employ virtual memory; however, programs that use large amounts of data (e.g. video editing) may still need to create temporary files.
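A minimal sketch of this spill-to-disk pattern, using Python's standard tempfile module: SpooledTemporaryFile keeps data in memory until it grows past a caller-chosen threshold, then transparently moves it to a temporary file on disk. The 1 MiB limit and 2 MiB payload below are purely illustrative.

    import tempfile

    # Data is buffered in memory until it exceeds max_size (1 MiB here);
    # beyond that, tempfile transparently spills it to a file on disk.
    with tempfile.SpooledTemporaryFile(max_size=1024 * 1024) as spool:
        for _ in range(2048):                 # write 2 MiB in total
            spool.write(b"x" * 1024)
        spool.seek(0)
        print(len(spool.read()), "bytes read back")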
Inter-process communication
Most operating systems offer primitives such as pipes, sockets or shared memory to pass data among programs, but often the simplest way (especially for programs that follow the Unix philosophy) is to write data into a temporary file and inform the receiving program of the location of the temporary file.
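As a sketch of this idiom (the payload and the child command are illustrative), the sending side below writes its data to a named temporary file and passes only the file's path to the receiving process:

    import os
    import subprocess
    import sys
    import tempfile

    # Sender: write the data to a named temporary file and keep it on disk
    # long enough for the receiver to open it (delete=False).
    with tempfile.NamedTemporaryFile(mode="w", suffix=".tmp", delete=False) as tmp:
        tmp.write("payload produced by the sending program\n")
        path = tmp.name

    # Receiver: a separate process is told only where the file lives.
    subprocess.run(
        [sys.executable, "-c", "import sys; print(open(sys.argv[1]).read())", path],
        check=True,
    )

    os.unlink(path)   # clean up afterwards; see the Cleanup section below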
Cleanup
Some programs create temporary files and then leave them behind, never deleting them. This can happen because the program crashed or because its developer simply forgot to add the code needed to delete the temporary files once the program is done with them. In Microsoft Windows, the temporary files left behind by programs accumulate over time and can take up a lot of disk space. System utilities, called temporary file cleaners or disk cleaners, can be used to address this issue. Unix-based operating systems suffer less from this problem because their temporary directories are typically cleared at boot.
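A disk cleaner of this kind can be as simple as a script that scans the temporary directory and removes files that have not been modified for some time; a rough sketch follows (the seven-day cutoff is an arbitrary choice, and real cleaners take more care over files that are still in use).

    import os
    import tempfile
    import time

    cutoff = time.time() - 7 * 24 * 3600          # older than seven days
    tmp_dir = tempfile.gettempdir()

    for name in os.listdir(tmp_dir):
        path = os.path.join(tmp_dir, name)
        try:
            if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
                os.remove(path)
        except OSError:
            pass                                   # skip locked or already-removed files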
Usage
The usual filename extension for temporary files is ".TMP". Temporary files are normally created in a directory designated for that purpose.
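In Python, for instance, the standard tempfile module locates the platform's designated temporary directory automatically, and the conventional extension can be supplied as a suffix (a small sketch):

    import os
    import tempfile

    print(tempfile.gettempdir())        # the platform's designated temp directory

    # mkstemp creates a uniquely named file in that directory;
    # the caller is responsible for closing and deleting it.
    fd, path = tempfile.mkstemp(suffix=".TMP")
    os.write(fd, b"scratch data")
    os.close(fd)
    print(path)
    os.unlink(path)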
Thin Client
A thin client (sometimes also called a lean or slim client) is a computer or a computer program which depends heavily on some other computer (its server) to fulfill its traditional computational roles.[1] This stands in contrast to the traditional fat client, a computer designed to take on these roles by itself. The exact roles assumed by the server may vary, from providing data persistence (for example, for diskless nodes) to actual information processing on the client's behalf.
Thin clients occur as components of a broader computer infrastructure, where many clients share their computations with the same server. As such, thin client infrastructures can be viewed as the amortization of some computing service across several user-interfaces. This is desirable in contexts where individual fat clients have much more functionality or power than the infrastructure either requires or uses. This can be contrasted, for example, with grid computing.
The most common sort of modern thin client is a low-end microcomputer which concentrates solely on providing a graphical user interface to the end-user. The remaining functionality, in particular the operating system, is provided by the server.
Thin clients as programs
The notion of a thin client extends directly to any client-server architecture: a thin client application is simply one that relies on its server to process most or all of its business logic. This idiom is relatively common for computer security reasons: a client obviously cannot be trusted with the logic that determines how trustworthy it is; an adversary would simply skip the logic and claim, "I'm as trustworthy as possible!"
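A minimal sketch of this idiom using Python's standard xmlrpc modules, run as two separate processes: the business rule (a toy discount calculation, chosen purely for illustration) lives only on the server, while the thin client merely forwards inputs and displays the answer.

    # server.py -- owns the business logic; run this first
    from xmlrpc.server import SimpleXMLRPCServer

    def quote_price(list_price, customer_tier):
        """Business rule the client is never trusted to run itself."""
        discount = {"gold": 0.20, "silver": 0.10}.get(customer_tier, 0.0)
        return round(list_price * (1 - discount), 2)

    server = SimpleXMLRPCServer(("localhost", 8000))
    server.register_function(quote_price)
    server.serve_forever()

    # client.py -- thin: collects input, displays the server's answer
    import xmlrpc.client

    proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
    print("Your price:", proxy.quote_price(100.0, "gold"))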
However, in web development in particular, client applications are becoming fatter. This is due to the adoption of heavily client-side technologies like Ajax and Flash, which are themselves strongly driven by the highly interactive nature of Web 2.0 applications.
A renewed interest in virtual private servers, with many virtualization programs now maturing, means that servers on the web today may handle many different client businesses. This can be thought of as a thin-client "virtual server" that depends on the actual host on which it runs to do all of its computation. The end result, at least, is the same: amortization of the computing service across many clients.
Characteristics
The advantages and problems of centralizing a computational resource are varied, and this section cannot exhaustively enumerate all of them. However, they tend to be related to certain characteristics of the thin-client architecture itself.
Single point of failure
The server, in taking on the entire processing load of several clients, forms a single point of failure for those clients. This has both positive and negative aspects. On the one hand, the security threat model for the software becomes entirely confined to the servers: the clients simply do not run the software, so only a small number of computers need to be rigorously secured rather than every single client machine. On the other hand, any denial-of-service attack against the server harms many clients: if one user crashes the system, everyone else loses their volatile data, and if one user's session introduces a virus, the entire server is infected with it.
For small networks, this single-point-of-failure property might even be extended: the server can be integrated with the file servers and print servers particular to its clients. This simplifies the network and its maintenance, but may increase the risk to that server.
Cheap client hardware
While the server must be robust enough to handle several client sessions at once, the clients can be built from much cheaper hardware than a fat client. This reduces the power consumption of those clients and makes the system marginally scalable: it is relatively cheap to add a couple more client terminals. The thin clients themselves generally have a very low total cost of ownership, but some of that saving is offset by the need for a robust server infrastructure with backups and so forth.[2] This is also reflected in power consumption: the thin clients are generally very low-power and may not even require cooling fans, while the servers are higher-power and require an air-conditioned server room.
On the other hand, while the total cost of ownership is low, the individual performance of the clients is also low. Thin clients, for example, are not suited to any real form of distributed computing. The costs of compiling software, rendering video, or any other computationally intensive task will be shared by all clients via the server.
Client simplicity
Since the clients are made from low-cost hardware with few moving parts, they can operate in more hostile environments than conventional computers. However, they inevitably need a network connection to their server, which must be isolated from such hostile environments. Since thin clients are cheap, they present a low risk of theft in general and are easy to replace when stolen or broken. Since they do not have complicated boot images, the problem of boot image control is centralized at the server.
Three-Tier Architecture
Three-tier[2] is a client-server architecture in which the user interface, the functional process logic ("business rules"), and computer data storage and data access are developed and maintained as independent modules, most often on separate platforms.
The three-tier model is considered to be a software architecture and a software design pattern.
Apart from the usual advantages of modular software with well defined interfaces, the three-tier architecture is intended to allow any of the three tiers to be upgraded or replaced independently as requirements or technology change. For example, a change of operating system in the presentation tier would only affect the user interface code.
Typically, the user interface runs on a desktop PC or workstation and uses a standard graphical user interface, functional process logic may consist of one or more separate modules running on a workstation or application server, and an RDBMS on a database server or mainframe contains the computer data storage logic. The middle tier may be multi-tiered itself (in which case the overall architecture is called an "n-tier architecture").
Three-tier architecture has the following three tiers:
Presentation tier
This is the topmost level of the application. The presentation tier displays information related to services such as browsing merchandise, purchasing, and shopping-cart contents. It communicates with the other tiers by sending results to the browser/client and to the other tiers in the network.
Application tier (Business Logic/Logic Tier)
The logic tier is pulled out from the presentation tier and, as its own layer, it controls an application’s functionality by performing detailed processing.
Data tier
This tier consists of database servers, where information is stored and retrieved. This tier keeps data neutral and independent of the application servers and business logic. Giving data its own tier also improves scalability and performance.
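As a compact sketch of this separation (not a prescription for any particular framework), the three tiers can be mocked up in a single Python script: sqlite3 stands in for the data tier, a plain function for the application tier, and console output for the presentation tier.

    import sqlite3

    # --- data tier: storage and retrieval only ------------------------------
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE items (name TEXT, unit_price REAL, qty INTEGER)")
    db.executemany("INSERT INTO items VALUES (?, ?, ?)",
                   [("keyboard", 30.0, 2), ("monitor", 150.0, 1)])

    def fetch_items():
        return db.execute("SELECT name, unit_price, qty FROM items").fetchall()

    # --- application tier: business rules; no SQL, no formatting ------------
    def cart_total(items, tax_rate=0.08):          # tax rate is illustrative
        subtotal = sum(price * qty for _, price, qty in items)
        return subtotal * (1 + tax_rate)

    # --- presentation tier: display only ------------------------------------
    items = fetch_items()
    for name, price, qty in items:
        print(f"{qty} x {name} @ {price:.2f}")
    print(f"Total (incl. tax): {cart_total(items):.2f}")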
Value Added Reseller (VAR)
A value-added reseller (VAR) is a company that adds some feature(s) to an existing product(s), then resells it (usually to end-users) as an integrated product or complete "turn-key" solution. This practice is common in the electronics industry, where, for example, a software application might be added to existing hardware.
This value can come from professional services such as integrating, customizing, consulting, training and implementation. The value can also be added by developing a specific application for the product, designed for the customer's needs, which is then resold as a new package.
The term is often used in the computer industry, where a company purchases computer components and builds a fully operational personal computer system usually customized for a specific task such as non-linear video editing. By doing this, the company has added value above the cost of the individual computer components. Customers would purchase the system from the reseller if they lack the time or experience to assemble the system themselves.
Resellers also have pre-negotiated pricing that enables them to discount more than a customer would get by going direct. This is because a reseller has already qualified for higher-tier discounts through previous engagements with other clients, and because the strategic partnership between the vendor and the VAR inherently brings the vendor more business. A VAR can also partner with many vendors, helping the client decide which solution is truly best for their particular environment, rather than relying on each vendor's claim that its own solution is best when that may not be the case.
Validity Check
A validation rule is a criterion used in the process of data validation, carried out after the data has been encoded onto an input medium, and involves a data vet or validation program. This is distinct from formal verification, where the operation of a program is shown to be that which was intended and to meet its purpose.
The method is to check that data falls within the appropriate parameters defined by the systems analyst. The validation program makes it possible to judge whether data is valid, but it cannot ensure complete accuracy; that can only be achieved through the use of all the clerical and computer controls built into the system at the design stage.

The difference between data validity and accuracy can be illustrated with a trivial example. A company has established a personnel file, and each record contains a field for the job grade. The permitted values are A, B, C, or D. An entry in a record may be valid and accepted by the system if it is one of these characters, but it may not be the correct grade for the individual worker concerned. Whether a grade is correct can only be established by clerical checks or by reference to other files. During systems design, therefore, data definitions are established which place limits on what constitutes valid data. Using these data definitions, a range of software validation checks can be carried out.
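The job-grade rule above can be expressed as a simple software validation check; a sketch in Python (the field name and record layout are illustrative):

    VALID_GRADES = {"A", "B", "C", "D"}

    def validate_record(record):
        """Return a list of validation errors; an empty list means the record is
        valid (which, as noted above, does not guarantee that it is accurate)."""
        errors = []
        grade = record.get("job_grade")
        if grade not in VALID_GRADES:
            errors.append(f"job_grade {grade!r} is not one of {sorted(VALID_GRADES)}")
        return errors

    print(validate_record({"name": "J. Smith", "job_grade": "B"}))   # []  -> valid
    print(validate_record({"name": "J. Smith", "job_grade": "X"}))   # one error -> rejected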