An example of a validation check is the procedure used to verify an ISBN.[1]
Size. The number of characters in a data item value is checked; for example, an ISBN must consist of exactly 10 characters (in the previous version of the standard; since 2007 the standard uses 13 characters).
Format checks. Data must conform to a specified format. Thus, the first 9 characters must be the digits 0 through 9; the 10th must be either one of those digits or an X.
Consistency. Codes in data items which are related in some way can be checked for the consistency of their relationship. The first number of the ISBN designates the language of publication; for example, books published in French-speaking countries carry the digit "2". This must match the address of the publisher, as given elsewhere in the record.
Range. Does not apply to ISBN, but typically data must lie within maximum and minimum preset values. For example, customer account numbers may be restricted within the values 10000 to 20000, if this is the arbitrary range of the numbers used for the system.
Check digit. An extra digit calculated on, for example, an account number can be used as a self-checking device. When the number is input to the computer, the validation program carries out a calculation similar to that used to generate the check digit originally and thus checks its validity. This kind of check will highlight transcription and transposition errors, where one or more digits have been mistyped or put in the wrong order. The 10th character of the 10-character ISBN is the check digit.
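The checks above can be combined into one routine. As a rough sketch (in Python; the function name is chosen here purely for illustration), the size, format, and check-digit tests for a 10-character ISBN might look like this:

```python
def is_valid_isbn10(isbn: str) -> bool:
    """Illustrative ISBN-10 validation: size, format, and check-digit tests.
    Assumes any hyphens have already been removed from the input."""
    # Size check: exactly 10 characters.
    if len(isbn) != 10:
        return False
    # Format check: first 9 characters must be digits; the 10th may be a digit or 'X'.
    if not isbn[:9].isdigit() or not (isbn[9].isdigit() or isbn[9] == "X"):
        return False
    # Check digit: the weighted sum of all 10 characters must be divisible by 11.
    # Weights run from 10 down to 1; an 'X' in the last position stands for 10.
    total = 0
    for position, char in enumerate(isbn):
        value = 10 if char == "X" else int(char)
        total += value * (10 - position)
    return total % 11 == 0

print(is_valid_isbn10("0306406152"))  # True  (weighted sum 132, divisible by 11)
print(is_valid_isbn10("0306406153"))  # False (wrong check digit)
```

Because every digit's weight is different, swapping two digits changes the weighted sum, which is why the check digit catches transposition as well as simple transcription errors.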
Virtual Memory
Virtual memory is a computer system technique which gives an application program the impression that it has contiguous working memory (an address space), while in fact it may be physically fragmented and may even overflow on to disk storage.
Developed for multitasking kernels, virtual memory provides two primary functions:
-
Each process has its own address space, so it does not need to be relocated and is not required to use relative addressing.
-
Each process sees one contiguous block of free memory upon launch. Fragmentation is hidden.
All implementations (excluding emulators) require hardware support. This is typically in the form of a Memory Management Unit built into the CPU.
Systems that use this technique make programming of large applications easier and use real physical memory (e.g. RAM) more efficiently than those without virtual memory. Virtual memory differs significantly from memory virtualization in that virtual memory allows resources to be virtualized as memory for a specific system, as opposed to a large pool of memory being virtualized as smaller pools for many different systems.
Note that "virtual memory" is more than just "using disk space to extend physical memory size" - that is merely the extension of the memory hierarchy to include hard disk drives. Extending memory to disk is a normal consequence of using virtual memory techniques, but could be done by other means such as overlays or swapping programs and their data completely out to disk while they are inactive. The definition of "virtual memory" is based on redefining the address space with a contiguous virtual memory addresses to "trick" programs into thinking they are using large blocks of contiguous addresses.
Modern general-purpose computer operating systems generally use virtual memory techniques for ordinary applications, such as word processors, spreadsheets, multimedia players, accounting, etc., except where the required hardware support (a memory management unit) is unavailable. Older operating systems, such as DOS[1] of the 1980s, or those for the mainframes of the 1960s, generally had no virtual memory functionality - notable exceptions being the Atlas, B5000 and Apple Computer's Lisa.
Embedded systems and other special-purpose computer systems which require very fast and/or very consistent response times may opt not to use virtual memory because it reduces determinism. The concern is that unpredictable processor exceptions produce unwanted jitter in CPU-operated I/O, which smaller embedded processors often perform directly to keep cost and power consumption low; in addition, the simple applications involved have little use for multitasking features.
Paged virtual memory
Almost all implementations of virtual memory divide the virtual address space of an application program into pages; a page is a block of contiguous virtual memory addresses. Pages are usually at least 4K bytes in size, and systems with large virtual address ranges or large amounts of real memory (e.g. RAM) generally use larger page sizes.
Page tables
Almost all implementations use page tables to translate the virtual addresses seen by the application program into physical addresses (also referred to as "real addresses") used by the hardware to process instructions. Each entry in the page table contains a mapping for a virtual page to either the real memory address at which the page is stored, or an indicator that the page is currently held in a disk file. (Although most do, some systems may not support use of a disk file for virtual memory.)
Systems can have one page table for the whole system or a separate page table for each application. If there is only one, different applications which are running at the same time share a single virtual address space, i.e. they use different parts of a single range of virtual addresses. Systems which use multiple page tables provide multiple virtual address spaces - concurrent applications think they are using the same range of virtual addresses, but their separate page tables redirect to different real addresses.
Dynamic address translation
When a CPU fetches an instruction located at a particular virtual address, or fetches data from or stores data to a particular virtual address, the virtual address must be translated to the corresponding physical address. This is done by a hardware component, sometimes called a memory management unit, which looks up the real address (from the page table) corresponding to a virtual address and passes the real address to the parts of the CPU which execute instructions. If the page tables indicate that the virtual memory page is not currently in real memory, the hardware raises a page fault exception (special internal signal) which invokes the paging supervisor component of the operating system (see below).
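As a minimal sketch of what this lookup amounts to (assuming 4 KB pages and representing the page table as a plain Python dictionary, which are simplifications rather than any particular machine's design):

```python
PAGE_SIZE = 4096  # assumed 4 KB pages for this illustration

class PageFault(Exception):
    """Raised when a virtual page has no resident physical frame."""

def translate(virtual_address: int, page_table: dict) -> int:
    """Translate a virtual address to a physical address via a page table.

    `page_table` maps virtual page numbers to physical frame numbers, or to
    None when the page is currently held on disk.
    """
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    frame_number = page_table.get(page_number)
    if frame_number is None:
        # Real hardware raises a page fault exception here, which invokes
        # the operating system's paging supervisor.
        raise PageFault(f"virtual page {page_number} is not in real memory")
    return frame_number * PAGE_SIZE + offset

# Example: virtual page 2 is mapped to physical frame 7.
table = {2: 7, 3: None}
print(hex(translate(2 * PAGE_SIZE + 0x10, table)))  # 0x7010
```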
Paging supervisor
This part of the operating system creates and manages the page tables. If the dynamic address translation hardware raises a page fault exception, the paging supervisor searches the page space on secondary storage for the page containing the required virtual address, reads it into real physical memory, updates the page tables to reflect the new location of the virtual address and finally tells the dynamic address translation mechanism to start the search again. Usually all of the real physical memory is already in use and the paging supervisor must first save an area of real physical memory to disk and update the page table to say that the associated virtual addresses are no longer in real physical memory but saved on disk. Paging supervisors generally save and overwrite areas of real physical memory which have been least recently used, because these are probably the areas which are used least often. So every time the dynamic address translation hardware matches a virtual address with a real physical memory address, it must put a time-stamp in the page table entry for that virtual address.
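A toy version of such a least-recently-used policy, with a fixed number of page frames and an ordered dictionary standing in for the per-entry time-stamps, might look like the following sketch (illustrative only, not how any particular operating system implements it):

```python
from collections import OrderedDict

class LRUPager:
    """Toy paging supervisor: keeps at most `frames` pages resident and
    evicts the least recently used page when a new one must be loaded."""

    def __init__(self, frames: int):
        self.frames = frames
        self.resident = OrderedDict()  # page number -> contents, oldest first

    def access(self, page: int) -> str:
        if page in self.resident:
            # Page already resident: record the use (the time-stamp's role).
            self.resident.move_to_end(page)
            return "hit"
        # Page fault: if real memory is full, save and evict the LRU page.
        if len(self.resident) >= self.frames:
            evicted, _ = self.resident.popitem(last=False)
            # A real supervisor would write `evicted` back to disk here
            # and update the page table to point at its disk location.
        self.resident[page] = f"contents of page {page}"  # read in from disk
        return "fault"

pager = LRUPager(frames=3)
for p in [1, 2, 3, 1, 4]:        # accessing page 4 evicts page 2, the LRU page
    print(p, pager.access(p))
```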
Permanently resident pages
All virtual memory systems have memory areas that are "pinned down", i.e. cannot be swapped out to secondary storage, for example:
-
Interrupt mechanisms generally rely on an array of pointers to the handlers for various types of interrupt (I/O completion, timer event, program error, page fault, etc.). If the pages containing these pointers or the code that they invoke were pageable, interrupt-handling would become even more complex and time-consuming; and it would be especially difficult in the case of page fault interrupts.
-
The page tables are usually not pageable.
-
Data buffers that are accessed outside of the CPU, for example by peripheral devices that use direct memory access (DMA) or by I/O channels. Usually such devices and the buses (connection paths) to which they are attached use physical memory addresses rather than virtual memory addresses. Even on buses with an IOMMU, which is a special memory management unit that can translate virtual addresses used on an I/O bus to physical addresses, the transfer cannot be stopped if a page fault occurs and then restarted when the page fault has been processed. So pages containing locations to which or from which a peripheral device is transferring data are either permanently pinned down or pinned down while the transfer is in progress.
-
Timing-dependent kernel/application areas cannot tolerate the varying response time caused by paging.
Virtual Private Network
A virtual private network (VPN) is a computer network that is implemented in an additional software layer (overlay) on top of an existing larger network for the purpose of creating a private scope of computer communications or providing a secure extension of a private network into an insecure network such as the Internet.
The links between nodes of a virtual private network are formed over logical connections or virtual circuits between hosts of the larger network. The Link Layer protocols of the virtual network are said to be tunneled through the underlying transport network.
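Tunneling amounts to carrying the virtual network's frames as the payload of packets on the underlying network. The following toy sketch (the header bytes and function names are invented purely for illustration, and real tunnels add addressing, encryption, and integrity checks) shows the basic encapsulation and decapsulation steps:

```python
def tunnel_encapsulate(inner_frame: bytes, outer_header: bytes) -> bytes:
    """The virtual network's frame becomes the payload of a packet
    on the underlying transport network."""
    return outer_header + inner_frame

def tunnel_decapsulate(outer_packet: bytes, header_len: int) -> bytes:
    """The far end of the tunnel strips the outer header and hands the
    original frame back to the virtual network."""
    return outer_packet[header_len:]

inner = b"link-layer frame of the virtual network"
outer = tunnel_encapsulate(inner, outer_header=b"UDP-HDR:")
assert tunnel_decapsulate(outer, header_len=8) == inner
```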
One common application is to secure communications through the public Internet, but a VPN does not need to have explicit security features such as authentication or traffic encryption. For example, VPNs can also be used to separate the traffic of different user communities over an underlying network with strong security features, or to provide access to a network via customized or private routing mechanisms.
VPNs are often installed by organizations to provide remote access to a secure organizational network. Generally, a VPN has a network topology more complex than a point-to-point connection. VPNs are also used to mask the IP address of individual computers within the Internet in order, for instance, to surf the World Wide Web anonymously or to access location-restricted services, such as Internet television.
Vulnerability
In computer security, the term vulnerability refers to a weakness which allows an attacker to reduce a system's information assurance. Vulnerability is the intersection of three elements: a system susceptibility or flaw, attacker access to the flaw, and attacker capability to exploit the flaw.[1] For a system to be vulnerable, an attacker must have at least one applicable tool or technique that can connect to a system weakness. In this frame, vulnerability is also known as the attack surface.
A security risk may be classified as a vulnerability. A vulnerability with one or more known instances of working and fully-implemented attacks is classified as an exploit. The window of vulnerability is the time from when the security hole was introduced or manifested in deployed software, to when access was removed, a security fix was available/deployed, or the attacker was disabled.
Constructs in programming languages that are difficult to use properly can be a large source of vulnerabilities. Common causes of vulnerabilities include:
-
Complexity: Large, complex systems increase the probability of flaws and unintended access points
-
Familiarity: Using common, well-known code, software, operating systems, and/or hardware increases the probability an attacker has or can find the knowledge and tools to exploit the flaw
-
Connectivity: More physical connections, privileges, ports, protocols, and services, and the longer each of those is accessible, increase vulnerability.
-
Password management flaws: The computer user uses weak passwords that could be discovered by brute force, stores passwords on the computer where a program can access them, or re-uses passwords across many programs and websites.
-
Fundamental operating system design flaws: The operating system designer chooses to enforce suboptimal policies on user/program management. For example, operating systems with a default-permit policy grant every program and every user full access to the entire computer. This operating system flaw allows viruses and malware to execute commands on behalf of the administrator.[1]
-
Internet website browsing: Some websites may contain harmful spyware or adware that can be installed automatically on the visiting computer systems. After visiting those websites, the systems become infected, and personal information will be collected and passed on to third parties.
-
Software bugs: The programmer leaves an exploitable bug in a software program. The software bug may allow an attacker to misuse an application.
-
Unchecked user input: The program assumes that all user input is safe. Programs that do not check user input can allow unintended direct execution of commands or SQL statements, leading to buffer overflows, SQL injection, or other attacks based on non-validated input (see the sketch below).
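A classic illustration of unchecked input is SQL built by string concatenation. The sketch below uses Python's built-in sqlite3 module and a hypothetical users table (both chosen only for demonstration) to contrast the vulnerable form with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"   # malicious input

# Vulnerable: the input is pasted directly into the SQL text, so the
# attacker's quote characters change the meaning of the statement.
unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(unsafe)   # returns alice's secret even though the name does not match

# Safer: a parameterized query treats the input purely as data.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)     # []
```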
Wide Area Network (WAN)
A wide area network (WAN) is a computer network that covers a broad area (i.e., any network whose communications links cross metropolitan, regional, or national boundaries [1]). This is in contrast with personal area networks (PANs), local area networks (LANs), campus area networks (CANs), or metropolitan area networks (MANs) which are usually limited to a room, building, campus or specific metropolitan area (e.g., a city) respectively.
WANs are used to connect LANs and other types of networks together, so that users and computers in one location can communicate with users and computers in other locations. Many WANs are built for one particular organization and are private. Others, built by Internet service providers, provide connections from an organization's LAN to the Internet. WANs are often built using leased lines. At each end of the leased line, a router connects to the LAN on one side and a hub within the WAN on the other. Leased lines can be very expensive. Instead of using leased lines, WANs can also be built using less costly circuit switching or packet switching methods. Network protocols including TCP/IP deliver transport and addressing functions. Protocols including Packet over SONET/SDH, MPLS, ATM and Frame relay are often used by service providers to deliver the links that are used in WANs. X.25 was an important early WAN protocol, and is often considered to be the "grandfather" of Frame Relay as many of the underlying protocols and functions of X.25 are still in use today (with upgrades) by Frame Relay.
Academic research into wide area networks can be broken down into three areas: Mathematical models, network emulation and network simulation.
Performance improvements are sometimes delivered via WAFS or WAN optimization.
Several options are available for WAN connectivity:[2]
Option | Description | Advantages | Disadvantages | Bandwidth range | Sample protocols used
Leased line | Point-to-point connection between two computers or Local Area Networks (LANs) | Most secure | Expensive | | PPP, HDLC, SDLC, HNAS
Circuit switching | A dedicated circuit path is created between end points; the best example is dial-up connections | Less expensive | Call setup | 28-144 kbps | PPP, ISDN
Packet switching | Devices transport packets via a shared single point-to-point or point-to-multipoint link across a carrier internetwork; variable-length packets are transmitted over Permanent Virtual Circuits (PVC) or Switched Virtual Circuits (SVC) | | Shared media across link | | X.25, Frame Relay
Cell relay | Similar to packet switching, but uses fixed-length cells instead of variable-length packets; data is divided into fixed-length cells and then transported across virtual circuits | Best for simultaneous use of voice and data | Overhead can be considerable | | ATM
Transmission rates usually range from 1200 bps to 6 Mbps, although some connections, such as ATM and leased lines, can reach speeds greater than 156 Mbps. Typical communication links used in WANs are telephone lines, microwave links and satellite channels.
Recently, with the proliferation of low-cost Internet connectivity, many companies and organizations have turned to VPNs to interconnect their networks, creating a WAN in that way. Companies such as Cisco, New Edge Networks and Check Point offer solutions to create VPN networks.
Web Administrator
A webmaster (a portmanteau of web and postmaster), also called a web architect, web developer, site author, website administrator, or (informally) webmeister, is a person responsible for maintaining one or more websites. The duties of the webmaster may include ensuring that the web servers, hardware and software are operating accurately, designing the website, generating and revising web pages, replying to user comments, and examining traffic through the site.
Webmasters may be generalists with HTML expertise who manage most or all aspects of Web operations. Depending on the nature of the websites they manage, webmasters typically know scripting languages such as PHP, Perl and JavaScript. They may also be required to know how to configure web servers such as Apache and serve as the server administrator.
An alternative definition of webmaster is a businessperson who uses online media to sell products and/or services. This broader definition of webmaster covers not just the technical aspects of overseeing Web site construction and maintenance but also management of content, advertising, marketing and order fulfilment for the Web site.[1]
Core responsibilities of the webmaster may include the regulation and management of access rights of different users of a website, the appearance of the site, and the setting up of website navigation. Content placement can be part of a webmaster's responsibilities, while content creation may not be.
Workstation
A workstation is a high-end microcomputer designed for technical or scientific applications. Intended primarily to be used by one person at a time, they are commonly connected to a local area network and run multi-user operating systems. The term workstation has also been used to refer to a mainframe computer terminal or a PC connected to a network.
Historically, workstations offered higher performance than personal computers, especially with respect to CPU and graphics, memory capacity, and multitasking capability. They are optimized for the visualization and manipulation of different types of complex data such as 3D mechanical design, engineering simulation (e.g. computational fluid dynamics), animation and rendering of images, and mathematical plots. Consoles consist of a high-resolution display, a keyboard and a mouse at a minimum, but also offer multiple displays, graphics tablets, 3D mice (devices for manipulating and navigating 3D objects and scenes), etc. Workstations were the first segment of the computer market to present advanced accessories and collaboration tools.
Presently, the workstation market is highly commoditized and is dominated by large PC vendors, such as Dell and HP, selling Microsoft Windows/Linux running on Intel Xeon/AMD Opteron. Alternative UNIX based platforms are provided by Apple Inc., Sun Microsystems, and SGI.
Source – Becker CPA Review, 2008 and Uniform CPA Examination Content Specifications, 2002 and Wikipedia.com