Features
Rapid response
Fast performance with a rapid response time is critical. Businesses cannot afford to have customers waiting for a TPS to respond; the turnaround time from the input of the transaction to the production of the output must be a few seconds or less.
Reliability
Many organizations rely heavily on their TPS; a breakdown will disrupt operations or even stop the business. For a TPS to be effective its failure rate must be very low. If a TPS does fail, then quick and accurate recovery must be possible. This makes well–designed backup and recovery procedures essential.
Inflexibility
A TPS wants every transaction to be processed in the same way regardless of the user, the customer or the time of day. If a TPS were flexible, there would be too many opportunities for non-standard operations. For example, a commercial airline needs to consistently accept airline reservations from a range of travel agents; accepting different transaction data from different travel agents would be a problem.
Controlled processing
The processing in a TPS must support an organization's operations. For example, if an organization allocates roles and responsibilities to particular employees, then the TPS should enforce and maintain this requirement.
Operating Risk
Definition
The Basel Committee defines operational risk as:
"The risk of loss resulting from inadequate or failed internal processes, people and systems or from external events."
However, the Basel Committee recognizes that operational risk is a term that has a variety of meanings and therefore, for internal purposes, banks are permitted to adopt their own definitions of operational risk, provided the minimum elements in the Committee's definition are included.
Scope exclusions
The Basel II definition of operational risk excludes, for example, strategic risk - the risk of a loss arising from a poor strategic business decision.
Other risk terms are seen as potential consequences of operational risk events. For example, reputational risk (damage to an organization through loss of its reputation or standing) can arise as a consequence (or impact) of operational failures - as well as from other events.
Basel II event type categories
The following lists the official Basel II defined event types with some examples for each category:
- Internal Fraud - misappropriation of assets, tax evasion, intentional mismarking of positions, bribery
- External Fraud - theft of information, hacking damage, third-party theft and forgery
- Employment Practices and Workplace Safety - discrimination, workers compensation, employee health and safety
- Clients, Products, & Business Practice - market manipulation, antitrust, improper trade, product defects, fiduciary breaches, account churning
- Damage to Physical Assets - natural disasters, terrorism, vandalism
- Business Disruption & Systems Failures - utility disruptions, software failures, hardware failures
- Execution, Delivery, & Process Management - data entry errors, accounting errors, failed mandatory reporting, negligent loss of client assets
Difficulties
It is relatively straightforward for an organization to set and observe specific, measurable levels of market risk and credit risk. By contrast it is relatively difficult to identify or assess levels of operational risk and its many sources. Historically organizations have accepted operational risk as an unavoidable cost of doing business.
Methods of operational risk management
Basel II and the supervisory bodies of various countries have prescribed soundness standards for operational risk management for banks and similar financial institutions. To complement these standards, Basel II gives guidance on three broad methods of capital calculation for operational risk:
- Basic Indicator Approach - based on annual revenue of the financial institution
- Standardized Approach - based on annual revenue of each of the broad business lines of the financial institution
- Advanced Measurement Approaches - based on the internally developed risk measurement framework of the bank, adhering to the standards prescribed (methods include IMA, LDA, scenario-based, and scorecard approaches)
The Operational Risk Management framework should include identification, measurement, monitoring, reporting, control and mitigation frameworks for Operational Risk.
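To make the Basic Indicator Approach concrete, here is a minimal sketch in Python, assuming the Basel II alpha factor of 15% applied to the average of positive annual gross income over the previous three years; the income figures in the example are invented.

```python
# Minimal sketch of the Basel II Basic Indicator Approach (BIA) capital charge.
# Gross income figures are illustrative, not taken from any real filing.

ALPHA = 0.15  # fixed percentage prescribed by Basel II for the BIA

def bia_capital_charge(gross_income_last_3_years):
    """Average the positive annual gross income figures and apply alpha.

    Years with zero or negative gross income are excluded from both the
    numerator and the denominator.
    """
    positive_years = [gi for gi in gross_income_last_3_years if gi > 0]
    if not positive_years:
        return 0.0
    return ALPHA * sum(positive_years) / len(positive_years)

# Example: gross income of 120m, a loss year (excluded), and 150m
print(bia_capital_charge([120_000_000, -10_000_000, 150_000_000]))  # 20250000.0
```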
Operating System
An operating system (OS) is an interface between hardware and user. It is responsible for the management and coordination of activities and the sharing of the resources of the computer, and it acts as a host for the computing applications run on the machine. As a host, one of the purposes of an operating system is to handle the details of the operation of the hardware. This relieves application programs from having to manage these details and makes it easier to write applications. Almost all computers (including handheld computers, desktop computers, supercomputers, and video game consoles), as well as some robots, domestic appliances (dishwashers, washing machines), and portable media players, use an operating system of some type.[1] Some of the oldest models may, however, use an embedded operating system contained on a data storage device.
Operating systems offer a number of services to application programs and users. Applications access these services through application programming interfaces (APIs) or system calls. By invoking these interfaces, the application can request a service from the operating system, pass parameters, and receive the results of the operation. Users may also interact with the operating system through some kind of software user interface (SUI), such as typing commands at a command-line interface (CLI) or using a graphical user interface (GUI, commonly pronounced "gooey"). For hand-held and desktop computers, the user interface is generally considered part of the operating system. On large multi-user systems like Unix and Unix-like systems, the user interface is generally implemented as an application program that runs outside the operating system. (Whether the user interface should be included as part of the operating system is a point of contention.)
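As a small illustration of an application requesting services from the operating system, the sketch below uses Python's os module, which exposes thin wrappers around the underlying open/write/read/close system calls; the file name is arbitrary.

```python
# Illustrative use of the operating system's system-call interface via Python's
# os module, which wraps the underlying open/write/read/close calls.
import os

fd = os.open("example.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)  # ask the OS for a file descriptor
os.write(fd, b"written through a system call\n")                           # pass the data to the kernel
os.close(fd)                                                                # release the descriptor

fd = os.open("example.txt", os.O_RDONLY)
print(os.read(fd, 1024))   # the kernel returns the stored bytes
os.close(fd)
```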
Common contemporary operating systems include BSD, Darwin (Mac OS X), Linux, SunOS (Solaris/OpenSolaris), and Windows NT (XP/Vista/7). While servers generally run Unix or some Unix-like operating system, embedded system markets are split amongst several operating systems,[2][3] although the Microsoft Windows line of operating systems has almost 90% of the client PC market.
Packet
A packet consists of two kinds of data: control information and user data (also known as payload). The control information provides data the network needs to deliver the user data, for example: source and destination addresses, error detection codes like checksums, and sequencing information. Typically, control information is found in packet headers and trailers, with user data in between.
Different communications protocols use different conventions for distinguishing between the elements and for formatting the data. In Binary Synchronous Transmission, the packet is formatted in 8-bit bytes, and special characters are used to delimit the different elements. Other protocols, like Ethernet, establish the start of the header and data elements by their location relative to the start of the packet. Some protocols format the information at a bit level instead of a byte level.
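As a sketch of position-based formatting, the example below packs an invented fixed-layout header (source, destination, sequence number) in front of the payload; the field sizes are illustrative and do not correspond to any real protocol.

```python
# Sketch of a made-up packet layout: a fixed-position header followed by the payload.
import struct

HEADER_FORMAT = "!HHI"  # 16-bit source, 16-bit destination, 32-bit sequence number (network byte order)

def build_packet(src, dst, seq, payload: bytes) -> bytes:
    header = struct.pack(HEADER_FORMAT, src, dst, seq)
    return header + payload

def parse_packet(packet: bytes):
    header_size = struct.calcsize(HEADER_FORMAT)
    src, dst, seq = struct.unpack(HEADER_FORMAT, packet[:header_size])
    return src, dst, seq, packet[header_size:]

pkt = build_packet(1, 42, 7, b"hello")
print(parse_packet(pkt))  # (1, 42, 7, b'hello')
```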
A good analogy is to consider a packet to be like a letter: the header is like the envelope, and the data area is whatever the person puts inside the envelope. A difference, however, is that some networks can break a larger packet into smaller packets when necessary (note that these smaller data elements are still formatted as packets).
A network design can achieve two major results by using packets: error detection and multiple host addressing.
Error detection
It is more efficient and reliable to calculate a checksum or cyclic redundancy check over the contents of a packet than to check errors using character-by-character parity bit checking.
The packet trailer often contains error checking data to detect errors that occur during transmission.
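A minimal sketch of trailer-based error detection follows: the sender appends a CRC-32 of the payload as a trailer and the receiver recomputes it. zlib.crc32 is used purely for illustration; real link layers define their own polynomials and framing.

```python
# Sketch of error detection with a CRC carried in a packet trailer.
import struct
import zlib

def add_trailer(payload: bytes) -> bytes:
    return payload + struct.pack("!I", zlib.crc32(payload))

def verify(packet: bytes) -> bool:
    payload, trailer = packet[:-4], packet[-4:]
    return zlib.crc32(payload) == struct.unpack("!I", trailer)[0]

pkt = add_trailer(b"user data")
print(verify(pkt))                      # True
corrupted = b"usar data" + pkt[-4:]     # single-character error in transit
print(verify(corrupted))                # False
```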
Host addressing
Modern networks usually connect three or more host computers together; in such cases the packet header generally contains addressing information so that the packet is received by the correct host computer. In complex networks constructed of multiple routing and switching nodes, like the ARPANET and the modern Internet, a series of packets sent from one host computer to another may follow different routes to reach the same destination. This technology is called packet switching.
Packet Filtering
Packet filtering inspects each packet passing through the network and accepts or rejects it based on user-defined rules. Although difficult to configure, it is fairly effective and mostly transparent to its users. However, it is susceptible to IP spoofing.
Packet filters act by inspecting the "packets" which represent the basic unit of data transfer between computers on the Internet. If a packet doesn’t match the packet filter's set of rules, the packet filter will drop (silently discard) the packet, or reject it (discard it, and send "error responses" to the source).
This type of packet filtering pays no attention to whether a packet is part of an existing stream of traffic (it stores no information on connection "state"). Instead, it filters each packet based only on information contained in the packet itself (most commonly using a combination of the packet's source and destination address, its protocol, and, for TCP and UDP traffic, the port number).
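The stateless matching just described might be sketched as follows; the rule table, field names, and addresses are invented for the example, and a real filter operates on parsed packet headers rather than Python dictionaries.

```python
# Sketch of a stateless packet filter: each packet is checked against an
# ordered rule list using only fields carried in the packet itself.
RULES = [
    # (action, protocol, destination address, destination port); None matches anything
    ("accept", "tcp", "192.0.2.10", 80),    # allow web traffic to one host
    ("reject", "tcp", None, 23),            # refuse telnet and notify the sender
    ("drop",   None,  None, None),          # default: silently discard
]

def filter_packet(packet: dict) -> str:
    for action, proto, dst, port in RULES:
        if proto is not None and packet["protocol"] != proto:
            continue
        if dst is not None and packet["dst_addr"] != dst:
            continue
        if port is not None and packet.get("dst_port") != port:
            continue
        return action
    return "drop"

print(filter_packet({"protocol": "tcp", "dst_addr": "192.0.2.10", "dst_port": 80}))   # accept
print(filter_packet({"protocol": "udp", "dst_addr": "198.51.100.7", "dst_port": 53})) # drop
```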
Parallel Processing
Parallel processing is the simultaneous use of more than one CPU or processor core to execute a program or multiple computational threads. Ideally, parallel processing makes programs run faster because there are more engines (CPUs or cores) running them. In practice, it is often difficult to divide a program in such a way that separate CPUs or cores can execute different portions without interfering with each other. Most computers have just one CPU, but some models have several, and multi-core processor chips are becoming the norm. There are even computers with thousands of CPUs.
With single-CPU, single-core computers, it is possible to perform parallel processing by connecting the computers in a network. However, this type of parallel processing requires very sophisticated software called distributed processing software.
Note that parallel processing differs from multitasking, in which a CPU provides the illusion of simultaneously executing instructions from multiple different programs by rapidly switching between them, or "interleaving" their instructions.
Parallel processing is also called parallel computing. In the quest for cheaper computing alternatives, parallel processing provides a viable option: the idle processor cycles across a network can be used effectively by sophisticated distributed computing software.
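A minimal sketch of dividing work across processor cores with Python's multiprocessing module follows; the workload (summing squares over chunks of a range) is arbitrary.

```python
# Sketch of parallel processing: the work is split into chunks and each chunk
# is executed by a separate worker process (and hence, potentially, a separate core).
from multiprocessing import Pool

def sum_of_squares(bounds):
    lo, hi = bounds
    return sum(n * n for n in range(lo, hi))

if __name__ == "__main__":
    chunks = [(0, 250_000), (250_000, 500_000), (500_000, 750_000), (750_000, 1_000_000)]
    with Pool(processes=4) as pool:
        partial_sums = pool.map(sum_of_squares, chunks)   # chunks run in parallel
    print(sum(partial_sums))
```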
Permanent File
The information stored in a permanent file is not altered or deleted. An example would be the U.S. Government’s file of social security numbers and associated receipts and payments.
Phishing
In the field of computer security, phishing is the criminally fraudulent process of attempting to acquire sensitive information such as usernames, passwords and credit card details by masquerading as a trustworthy entity in an electronic communication. Communications purporting to be from popular social web sites, auction sites, online payment processors or IT administrators are commonly used to lure the unsuspecting public. Phishing is typically carried out by e-mail or instant messaging,[1] and it often directs users to enter details at a fake website whose look and feel are almost identical to the legitimate one. Even when using server authentication, it may require tremendous skill to detect that the website is fake. Phishing is an example of social engineering techniques used to fool users,[2] and exploits the poor usability of current web security technologies.[3] Attempts to deal with the growing number of reported phishing incidents include legislation, user training, public awareness, and technical security measures.
A phishing technique was described in detail in 1987, and the first recorded use of the term "phishing" was made in 1996. The term is a variant of fishing,[4] probably influenced by phreaking[5][6] or password harvesting fishing, and alludes to baits used to "catch" financial information and passwords.
Physical Access Controls
[Image: underground entrance to the New York City Subway system]
Physical access by a person may be allowed depending on payment, authorization, etc. There may also be one-way traffic of people. These controls can be enforced by personnel such as a border guard, a doorman, or a ticket checker, or with a device such as a turnstile. There may be fences to prevent circumventing this access control. An alternative to access control in the strict sense (physically controlling access itself) is a system of checking authorized presence; see, e.g., ticket controller (transportation). A variant is exit control, e.g. of a shop (checkout) or a country.
In physical security, the term access control refers to the practice of restricting entrance to a property, a building, or a room to authorized persons. Physical access control can be achieved by a human (a guard, bouncer, or receptionist), through mechanical means such as locks and keys, or through technological means such as access control systems like the access control vestibule. Within these environments, physical key management may also be employed as a means of further managing and monitoring access to mechanically keyed areas or access to certain small assets.
Physical access control is a matter of who, where, and when. An access control system determines who is allowed to enter or exit, where they are allowed to exit or enter, and when they are allowed to enter or exit. Historically this was partially accomplished through keys and locks. When a door is locked only someone with a key can enter through the door depending on how the lock is configured. Mechanical locks and keys do not allow restriction of the key holder to specific times or dates. Mechanical locks and keys do not provide records of the key used on any specific door and the keys can be easily copied or transferred to an unauthorized person. When a mechanical key is lost or the key holder is no longer authorized to use the protected area, the locks must be re-keyed.
Electronic access control uses computers to solve the limitations of mechanical locks and keys. A wide range of credentials can be used to replace mechanical keys. The electronic access control system grants access based on the credential presented. When access is granted, the door is unlocked for a predetermined time and the transaction is recorded. When access is refused, the door remains locked and the attempted access is recorded. The system will also monitor the door and alarm if the door is forced open or held open too long after being unlocked.
Access control system operation
When a credential is presented to a reader, the reader sends the credential’s information, usually a number, to a control panel, a highly reliable processor. The control panel compares the credential's number to an access control list, grants or denies the presented request, and sends a transaction log to a database. When access is denied based on the access control list, the door remains locked. If there is a match between the credential and the access control list, the control panel operates a relay that in turn unlocks the door. The control panel also ignores a door open signal to prevent an alarm. Often the reader provides feedback, such as a flashing red LED for an access denied and a flashing green LED for an access granted.
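The decision logic just described could be sketched roughly as follows; the credential numbers, door name, and log format are invented for the example.

```python
# Sketch of a control panel's decision: compare the credential number sent by the
# reader against an access control list, log the transaction, and drive the lock.
from datetime import datetime

ACCESS_CONTROL_LIST = {"server room": {40123, 40124}}   # door -> permitted credential numbers
transaction_log = []

def unlock_door(door: str):
    print(f"relay energized: {door} unlocked")

def present_credential(door: str, credential_number: int) -> bool:
    granted = credential_number in ACCESS_CONTROL_LIST.get(door, set())
    transaction_log.append((datetime.now(), door, credential_number, granted))
    if granted:
        unlock_door(door)          # operate the relay for a predetermined time
    return granted

present_credential("server room", 40123)   # access granted: door unlocked, transaction logged
present_credential("server room", 55555)   # access denied: door stays locked, attempt logged
```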
The description above illustrates a single-factor transaction. Credentials can be passed around, thus subverting the access control list. For example, Alice has access rights to the server room but Bob does not. Alice either gives Bob her credential or Bob takes it; he now has access to the server room. To prevent this, two-factor authentication can be used. In a two-factor transaction, the presented credential and a second factor are needed for access to be granted. The second factor can be a PIN, a second credential, operator intervention, or a biometric input. Often the factors are characterized as:
- something you have, such as an access badge or passcard;
- something you know, such as a PIN or password;
- something you are, typically a biometric input.
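A minimal sketch of a two-factor check follows, assuming the second factor is a PIN; the names and stored values are illustrative, and a real system would store salted hashes rather than plain PINs.

```python
# Sketch of a two-factor check: the presented credential (something you have)
# and a PIN (something you know) must both match before access is granted.
CREDENTIAL_PINS = {40123: "4821"}   # credential number -> enrolled PIN (illustrative only)

def two_factor_check(credential_number: int, pin: str) -> bool:
    enrolled_pin = CREDENTIAL_PINS.get(credential_number)
    return enrolled_pin is not None and enrolled_pin == pin

print(two_factor_check(40123, "4821"))   # True: both factors present and correct
print(two_factor_check(40123, "0000"))   # False: Bob holding Alice's badge still needs her PIN
```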
Credential
A credential is a physical/tangible object, a piece of knowledge, or a facet of a person's physical being, that enables an individual access to a given physical facility or computer-based information system. Typically, credentials can be something you know (such as number or PIN), something you have (such as an access badge), something you are (such as a biometric feature) or some combination of these items. The typical credential is an access card, key fob, or other key. There are many card technologies including magnetic stripe, bar code, Wiegand, 125 kHz proximity, 26 bit card-swipe, contact smart cards, and contactless smart cards. Also available are key-fobs which are more compact than ID cards and attach to a key ring. Typical biometric technologies include fingerprint, facial recognition, iris recognition, retinal scan, voice, and hand geometry.
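As an illustration of how a card credential encodes its data, the sketch below decodes the widely described 26-bit Wiegand layout (one even-parity bit, an 8-bit facility code, a 16-bit card number, and one odd-parity bit); vendors use many variants, so treat this layout as an assumption for the example.

```python
# Sketch of decoding the common 26-bit Wiegand credential format:
# 1 even-parity bit, 8-bit facility code, 16-bit card number, 1 odd-parity bit.
def decode_wiegand26(bits: str):
    assert len(bits) == 26 and set(bits) <= {"0", "1"}
    facility_code = int(bits[1:9], 2)
    card_number = int(bits[9:25], 2)
    # Leading bit is even parity over the first 13 bits; trailing bit is odd parity over the last 13.
    assert bits[:13].count("1") % 2 == 0, "even parity check failed"
    assert bits[13:].count("1") % 2 == 1, "odd parity check failed"
    return facility_code, card_number

print(decode_wiegand26("10000000100000100110100100"))  # (1, 1234)
```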
Credentials for an access control system are typically held within a database, which stores access credentials for all staff members of a given firm or organisation. Assigning access control credentials derives from the basic tenets of access control: who has access to a given area, why the person should have access to that area, and where given persons should have access. For example, in a given firm, senior management figures may need general access to all areas of an organisation, ICT staff may need primary access to computer software, hardware and general computer-based information systems, and janitors and maintenance staff may need access mainly to service areas, cleaning closets, and electrical and heating apparatus.
Access control system components
An access control point can be a door, turnstile, parking gate, elevator, or other physical barrier where granting access can be electrically controlled. Typically the access point is a door. An electronic access control door can contain several elements. At its most basic there is a stand-alone electric lock. The lock is unlocked by an operator with a switch. To automate this, operator intervention is replaced by a reader. The reader could be a keypad where a code is entered, a card reader, or a biometric reader. Readers do not usually make an access decision but send a card number to an access control panel that verifies the number against an access list. To monitor the door position a magnetic door switch is used; in concept the door switch is not unlike those on refrigerators or car doors. Generally only entry is controlled and exit is uncontrolled. In cases where exit is also controlled, a second reader is used on the opposite side of the door. In cases where exit is not controlled (free exit), a device called a request-to-exit (REX) is used. Request-to-exit devices can be a pushbutton or a motion detector. When the button is pushed or the motion detector detects motion at the door, the door alarm is temporarily ignored while the door is opened. Exiting a door without having to electrically unlock the door is called mechanical free egress; this is an important safety feature. In cases where the lock must be electrically unlocked on exit, the request-to-exit device also unlocks the door.
Access control topology
[Diagrams: typical access control door wiring; access control door wiring when using intelligent readers]
Access control decisions are made by comparing the credential to an access control list. This lookup can be done by a host or server, by an access control panel, or by a reader. The development of access control systems has seen a steady push of the lookup out from a central host to the edge of the system, i.e. the reader. The predominant topology circa 2009 is hub and spoke, with a control panel as the hub and the readers as the spokes. The lookup and control functions are performed by the control panel. The spokes communicate through a serial connection, usually RS-485. Some manufacturers are pushing the decision making to the edge by placing a controller at the door. The controllers are IP-enabled and connect to a host and database using standard networks.
Types of readers
Access control readers may be classified by functions they are able to perform:
- Basic (non-intelligent) readers: simply read the card number or PIN and forward it to a control panel. In the case of biometric identification, such readers output the ID number of a user. Typically the Wiegand protocol is used for transmitting data to the control panel, but other options such as RS-232, RS-485 and Clock/Data are not uncommon. This is the most popular type of access control reader. Examples of such readers are the RF Tiny by RFLOGICS, ProxPoint by HID, and P300 by Farpointe Data.
- Semi-intelligent readers: have all the inputs and outputs necessary to control door hardware (lock, door contact, exit button), but do not make any access decisions. When a user presents a card or enters a PIN, the reader sends the information to the main controller and waits for its response. If the connection to the main controller is interrupted, such readers stop working or function in a degraded mode. Usually semi-intelligent readers are connected to a control panel via an RS-485 bus. Examples of such readers are the InfoProx Lite IPL200 by CEM Systems and the AP-510 by Apollo.
- Intelligent readers: have all the inputs and outputs necessary to control door hardware, and also have the memory and processing power necessary to make access decisions independently. Like semi-intelligent readers, they are connected to a control panel via an RS-485 bus. The control panel sends configuration updates and retrieves events from the readers. Examples of such readers are the InfoProx IPO200 by CEM Systems and the AP-500 by Apollo. There is also a new generation of intelligent readers referred to as "IP readers". Systems with IP readers usually do not have traditional control panels; the readers communicate directly with a PC that acts as a host. Examples of such readers are the PowerNet IP Reader by Isonas Security Systems, the ID08 by Solus (which has a built-in web service to make it user friendly), the Edge ER40 reader by HID Global, LogLock and UNiLOCK by ASPiSYS Ltd, and the BioEntry Plus reader by Suprema Inc.
Some readers may have additional features such as an LCD and function buttons for data collection purposes (e.g. clock-in/clock-out events for attendance reports), a camera/speaker/microphone for intercom, and smart card read/write support.
Point-of-Sale System (POS)
Point of sale (POS) or checkout is both a checkout counter in a shop and the location where a transaction occurs. Colloquially, a "checkout" refers to a POS terminal or, more generally, to the hardware and software used for checkouts, the equivalent of an electronic cash register. A POS terminal manages the selling process through a salesperson-accessible interface. The same system allows the creation and printing of the voucher.
Hospitality industry
Hospitality point of sale systems are computerized systems incorporating registers, computers and peripheral equipment, usually on a computer network. Like other point of sale systems, these systems keep track of sales, labor and payroll, and can generate records used in accounting and bookkeeping. They may be accessed remotely by restaurant corporate offices, troubleshooters and other authorized parties.
Point of sale systems have revolutionized the restaurant industry, particularly in the fast food sector. In the most recent technologies, registers are computers, sometimes with touch screens. The registers connect to a server, often referred to as a "store controller" or a "central control unit." Printers and monitors are also found on the network. Additionally, remote servers can connect to store networks and monitor sales and other store data.
Such systems have decreased service times and increased the efficiency of order taking.
Another innovation in technology for the restaurant industry is Wireless POS. Many restaurants with high volume use wireless handheld POS to collect orders which are sent to a server. The server sends required information to the kitchen in real time.
Restaurant business
Restaurant POS refers to point of sale (POS) software that runs on computers, usually touchscreen terminals or wireless handheld devices. Restaurant POS systems help businesses track transactions in real time.
Typical restaurant POS software is able to print guest checks, print orders to kitchens and bars for preparation, process credit cards and other payment cards, and run reports. In addition, some systems implement wireless pagers and electronic signature capture devices.
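As a simple illustration of the guest-check side of such software, the sketch below itemizes and totals an order; the menu items, tax rate, and layout are invented for the example.

```python
# Sketch of printing a guest check: itemize the order, apply tax, and total it.
TAX_RATE = 0.08   # illustrative tax rate

def print_guest_check(table, items):
    subtotal = sum(price for _, price in items)
    tax = round(subtotal * TAX_RATE, 2)
    print(f"Table {table}")
    for name, price in items:
        print(f"  {name:<20} {price:>7.2f}")
    print(f"  {'Subtotal':<20} {subtotal:>7.2f}")
    print(f"  {'Tax':<20} {tax:>7.2f}")
    print(f"  {'Total':<20} {subtotal + tax:>7.2f}")

print_guest_check(12, [("Burger", 9.50), ("Fries", 3.25), ("Soda", 2.00)])
```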
In the fast food industry, registers may be at the front counter, or configured for drive through or walk through cashiering and order taking. Front counter registers take and serve orders at the same terminal, while drive through registers allow orders to be taken at one or more drive through windows, to be cashiered and served at another. In addition to registers, drive through and kitchen monitors may be used by store personnel to view orders. Once orders appear they may be deleted or recalled by "bump bars", small boxes which have different buttons for different uses. Drive through systems are often enhanced by the use of drive through wireless (or headset) systems which enable communications with drive through speakers.
POS systems are often designed for a variety of clients, and can be programmed by the end users to suit their needs. Some large clients write their own specifications for vendors to implement. In some cases, POS systems are sold and supported by third party distributors, while in other cases they are sold and supported directly by the vendor.
Wireless systems consist of drive-through microphones and speakers (often one speaker will serve both purposes), which are wired to a "base station" or "center module." This will, in turn, broadcast to headsets. Headsets may be an all-in-one headset or one connected to a belt pack.
Hotel business
POS software allows for transfer of meal charges from dining room to guest room with a button or two. It may also need to be integrated with property management software.
Primary Storage
Primary storage (or main memory or internal memory), often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.
Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory, which was still rather cumbersome. A revolution was started with the invention of the transistor, which soon enabled then-unbelievable miniaturization of electronic memory via solid-state silicon chip technology.
This led to modern random-access memory (RAM). It is small-sized and light, but quite expensive at the same time. (The particular types of RAM used for primary storage are also volatile, i.e. they lose the information when not powered.)
Traditionally, there are two more sub-layers of primary storage, besides the main large-capacity RAM:
- Processor registers are located inside the processor. Each register typically holds a word of data (often 32 or 64 bits). CPU instructions instruct the arithmetic and logic unit to perform various calculations or other operations on this data (or with the help of it). Registers are technically among the fastest of all forms of computer data storage.
- Processor cache is an intermediate stage between ultra-fast registers and much slower main memory. It is introduced solely to increase the performance of the computer. The most actively used information in the main memory is duplicated in the cache memory, which is faster but of much smaller capacity; the cache, in turn, is much slower but much larger than the processor registers. A multi-level hierarchical cache setup is also commonly used: the primary cache is the smallest and fastest and is located inside the processor, while the secondary cache is somewhat larger and slower.
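The cache's role of duplicating the most actively used main-memory locations can be illustrated with a tiny simulation; the capacity and the least-recently-used eviction policy are arbitrary choices for the sketch.

```python
# Sketch of a small cache in front of a slower main memory: recently used
# addresses are served from the cache; others fall through to memory and are cached.
from collections import OrderedDict

MAIN_MEMORY = {addr: addr * 10 for addr in range(1024)}   # pretend this is slow

class Cache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.lines = OrderedDict()          # address -> value, kept in recency order

    def read(self, addr):
        if addr in self.lines:              # cache hit: fast path
            self.lines.move_to_end(addr)
            return self.lines[addr], "hit"
        value = MAIN_MEMORY[addr]           # cache miss: go to main memory
        self.lines[addr] = value
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)  # evict the least recently used line
        return value, "miss"

cache = Cache()
for addr in [5, 6, 5, 7, 8, 9, 5]:
    print(addr, cache.read(addr))
```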
Main memory is directly or indirectly connected to the CPU via a memory bus, which actually comprises two buses: an address bus and a data bus. The CPU first sends a number, called the memory address, through the address bus; the address indicates the desired location of data. Then it reads or writes the data itself using the data bus. Additionally, a memory management unit (MMU) is a small device between the CPU and RAM that recalculates the actual memory address, for example to provide an abstraction of virtual memory or other tasks.
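Here is a minimal sketch of the read path just described, with an MMU-style translation placed in front of a pretend memory array; the page size and page table are invented for the example.

```python
# Sketch of a CPU read: the MMU translates a virtual address to a physical one,
# the physical address goes out on the "address bus", and the data comes back on the "data bus".
PAGE_SIZE = 256
PAGE_TABLE = {0: 3, 1: 7}                      # virtual page -> physical frame (illustrative)
PHYSICAL_MEMORY = bytearray(16 * PAGE_SIZE)    # 16 frames of pretend RAM
PHYSICAL_MEMORY[3 * PAGE_SIZE + 5] = 0x2A      # plant a value to read back

def translate(virtual_address: int) -> int:
    page, offset = divmod(virtual_address, PAGE_SIZE)
    return PAGE_TABLE[page] * PAGE_SIZE + offset

def read(virtual_address: int) -> int:
    physical_address = translate(virtual_address)   # address bus
    return PHYSICAL_MEMORY[physical_address]        # data bus

print(hex(read(5)))   # 0x2a: virtual address 5 maps to frame 3, offset 5
```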
As the RAM types used for primary storage are volatile (cleared at start up), a computer containing only such storage would not have a source to read instructions from, in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access).
Many types of "ROM" are not literally read only, as updates are possible; however, writing is slow and memory must be erased in large portions before it can be re-written. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM; rather, they use large capacities of secondary storage, which is non-volatile as well and not as costly.
Private Network
In Internet Protocol terminology, a private network is typically a network that uses private IP address space, following the standards set by RFC 1918 and RFC 4193. These addresses are common in home and office local area networks (LANs), as globally routable addresses are scarce, expensive to obtain, or their use is not necessary. Private IP address spaces were originally defined in efforts to delay IPv4 address exhaustion, but they are also a feature of the next generation Internet Protocol, IPv6.
These addresses are private because they are not globally delegated, meaning they aren't allocated to a specific organization. Anyone can use these addresses without approval from a regional Internet registry (RIR). Consequently, they are not routable within the public Internet. If such a private network needs to connect to the Internet, it must use either a network address translator (NAT) gateway, or a proxy server.
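Whether an address falls inside the RFC 1918 private ranges can be checked directly, for example with Python's ipaddress module; the addresses in the example are from the documentation and private ranges.

```python
# Check whether an IPv4 address falls in one of the RFC 1918 private ranges.
import ipaddress

RFC1918_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(address: str) -> bool:
    ip = ipaddress.ip_address(address)
    return any(ip in network for network in RFC1918_NETWORKS)

print(is_rfc1918("192.168.1.20"))   # True: typical home LAN address
print(is_rfc1918("203.0.113.9"))    # False: not an RFC 1918 address
```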
The most common use of these addresses is in home networks, since most Internet service providers (ISPs) only allocate a single IP address to each residential customer, but many homes have more than one networked device (for example, several computers and a printer). In this situation, a NAT gateway is usually used to enable Internet connectivity to multiple hosts. They are also commonly used in corporate networks, which for security reasons, are not connected directly to the Internet. Often a proxy, SOCKS gateway, or similar devices, are used to provide restricted Internet access to internal users. In both cases, private addresses are seen as adding security to the internal network, since it's impossible for an Internet host to connect directly to an internal system.
Because many private networks use the same private IP address space, a common problem occurs when merging such networks: the collision of address space, resulting in duplication of addresses on multiple devices. In this case, networks must renumber, often a difficult and time-consuming task, or a NAT router must be placed between the networks to masquerade the duplicated addresses.
It is not uncommon for packets originating in private address spaces to leak onto the Internet. Poorly configured private networks often attempt reverse DNS lookups for these addresses, causing extra load on the Internet root nameservers. The AS112 project attempted to mitigate this load by providing special "blackhole" anycast nameservers for private addresses which only return "not found" answers for these queries. Organizational edge routers are usually configured to drop ingress IP traffic for these networks, which can occur either by accident, or from malicious traffic using a spoofed source address. Less commonly, ISP edge routers will drop such egress traffic from customers, which reduces the impact to the Internet of such misconfigured or malicious hosts on the customer's network.
Processing Controls
In business and accounting, information technology controls (or IT controls) are specific activities performed by persons or systems designed to ensure that business objectives are met. They are a subset of an enterprise's internal control. IT control objectives relate to the confidentiality, integrity, and availability of data and the overall management of the IT function of the business enterprise.
IT controls are often described in two categories: IT general controls (ITGC) and IT application controls. ITGC include controls over the information technology (IT) environment, computer operations, access to programs and data, program development and program changes. IT application controls refer to transaction processing controls, sometimes called "input-processing-output" controls.
Information technology controls have been given increased prominence in corporations listed in the United States by the Sarbanes-Oxley Act. The COBIT framework (Control Objectives for Information and related Technology) is a widely used framework promulgated by the IT Governance Institute, which defines a variety of ITGC and application control objectives and recommended evaluation approaches. IT departments in organizations are often led by a chief information officer (CIO), who is responsible for ensuring that effective information technology controls are utilized.