Fundamentals, basic concept of computer




CHAPTER # 01 FUNDAMENTALS, BASIC CONCEPT OF COMPUTER









COMPUTER
Definition

A computer is a machine that can be programmed to manipulate symbols. Its principal characteristics are:

  • It responds to a specific set of instructions in a well-defined manner.

  • It can execute a prerecorded list of instructions (a program).

  • It can quickly store and retrieve large amounts of data.

OR

A computer is a programmable machine designed to automatically carry out a sequence of arithmetic or logical operations.

OR

A computer is an electronic device that can transmit, store and manipulate information or data, which can be of numeric or character type.
History of Computers

"Computer" was originally a job title; it was used to describe those human beings (predominantly women) whose job it was to perform the repetitive calculations required to compute such things as navigational tables, tide charts, and planetary positions for astronomical almanacs.

The abacus was an early aid for mathematical computation. Its only value is that it aids the memory of the human performing the calculation; a skilled abacus operator can work addition and subtraction problems at the speed of a person equipped with a hand calculator (multiplication and division are slower). The abacus is often wrongly attributed to China: in fact, the oldest surviving abacus was used in 300 B.C. by the Babylonians. The abacus is still in use today, principally in the Far East. A modern abacus consists of rings that slide over rods, but older designs date from the time when pebbles were used for counting (the word "calculus" comes from the Latin word for pebble).
A very old abacus; a more modern abacus.

Note how the abacus is really just a representation of the human fingers: the 5 lower rings on each rod represent the 5 fingers and the 2 upper rings represent the 2 hands.

Logarithm Invention: In 1617 an eccentric (some say mad) Scotsman named John Napier invented logarithms, a technique that allows multiplication to be performed via addition. The magic ingredient is the logarithm of each operand, originally obtained from a printed table. But Napier also invented an alternative to tables, in which the logarithm values were carved on ivory sticks now called Napier's Bones.
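Napier's trick is easy to demonstrate in modern terms. The short Python sketch below (an illustrative aside, not part of the history) multiplies two numbers using only a logarithm table's two operations, log and antilog:

```python
import math

# log(a*b) = log(a) + log(b): multiplication becomes addition.
# Napier's printed tables supplied the log and antilog lookups;
# here the math library stands in for them.
a, b = 123.0, 456.0
log_sum = math.log(a) + math.log(b)   # "add the logarithms"
product = math.exp(log_sum)           # the antilog recovers the product
print(product)
```

Apart from rounding error in the last few decimal places, the result equals 123 × 456, exactly the kind of labor the slide rule later mechanized.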

Napier's invention led directly to the slide rule, first built in England in 1632 and still in use in the 1960s by the NASA engineers of the Mercury, Gemini, and Apollo programs that landed men on the moon.


A slide rule; an original and a more modern set of Napier's Bones.

Gear-driven calculating machine invention:

Leonardo da Vinci (1452-1519) made drawings of gear-driven calculating machines but apparently never built any.

The first gear-driven calculating machine to actually be built was probably the calculating clock, so named by its inventor, the German professor Wilhelm Schickard, in 1623. The device got little publicity because Schickard died of the bubonic plague soon afterward.

Pascaline Invention:

In 1642 Blaise Pascal, at age 19, invented the Pascaline as an aid for his father, who was a tax collector. Pascal built 50 of these gear-driven one-function calculators (they could only add) but could not sell many because of their exorbitant cost and because they were not very accurate.



Stepped reckoner invention:

Just a few years after Pascal, the German Gottfried Wilhelm Leibniz (co-inventor with Newton of calculus) managed to build a four-function (addition, subtraction, multiplication, and division) calculator that he called the stepped reckoner because, instead of gears, it employed fluted drums having ten flutes arranged around their circumference in a stair-step fashion. Although the stepped reckoner employed the decimal number system (each drum had 10 flutes), Leibniz was the first to advocate use of the binary number system, which is fundamental to the operation of modern computers. Leibniz is considered one of the greatest of the philosophers, but he died poor and alone.

Leibniz's Stepped Reckoner
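The binary system Leibniz championed is simple to illustrate (a modern Python aside):

```python
# Every whole number can be written using only the digits 0 and 1.
n = 13
print(bin(n))           # decimal 13 in binary notation
print(int("1101", 2))   # and the binary string converted back
```

The drums of the stepped reckoner held ten positions each; a binary machine needs only two, which is exactly why the scheme suits electronic switches.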



Punched Card invention:

In 1801 the Frenchman Joseph Marie Jacquard invented a power loom that could base its weave (and hence the design on the fabric) upon a pattern automatically read from punched wooden cards, held together in a long row by rope. Descendants of these punched cards have been in use ever since. Jacquard's technology was a real boon to mill owners, but it put many loom operators out of work. Angry mobs smashed Jacquard looms and once attacked Jacquard himself. History is full of examples of labor unrest following technological innovation, yet most studies show that, overall, technology has actually increased the number of jobs.



A close-up of a Jacquard card

By 1822 the English mathematician Charles Babbage was proposing a steam-driven calculating machine the size of a room, which he called the Difference Engine. This machine would be able to compute tables of numbers, such as logarithm tables. He obtained government funding for the project because of the importance of numeric tables in ocean navigation: by promoting their commercial and military navies, the British government had managed to become the earth's greatest empire, yet at that time it was publishing a seven-volume set of navigation tables that came with a companion volume of corrections listing over 1,000 numerical errors. It was hoped that Babbage's machine could eliminate errors in these types of tables.

But construction of the Difference Engine proved exceedingly difficult, and the project soon became the most expensive government-funded project up to that point in English history. Ten years later the device was still nowhere near complete, acrimony abounded between all involved, and funding dried up. The device was never finished. Babbage was not deterred, and by then was on to his next brainstorm, which he called the Analytic Engine. This device, large as a house and powered by 6 steam engines, would be more general purpose in nature because it would be programmable, thanks to the punched-card technology of Jacquard. But it was Babbage who made an important intellectual leap regarding the punched cards.

Furthermore, Babbage realized that punched paper could be employed as a storage mechanism, holding computed numbers for future reference. Because of the connection to the Jacquard loom, Babbage called the two main parts of his Analytic Engine the "Store" and the "Mill", as both terms are used in the weaving industry. The Store was where numbers were held, and the Mill was where they were "woven" into new results. In a modern computer these same parts are called the memory unit and the central processing unit (CPU). The Analytic Engine also had a key feature that distinguishes computers from calculators: the conditional statement. A conditional statement lets a program behave differently depending on the data it is given, so the same program can achieve different results each time it is run.
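The conditional idea can be sketched in a few lines of modern Python (an illustration only; the names are invented):

```python
def check_balance(balance):
    # The branch below is the "conditional statement": the same
    # program produces different results for different inputs,
    # which no fixed-function calculator can do.
    if balance < 0:
        return "overdrawn"
    return "in credit"

print(check_balance(-5))
print(check_balance(100))
```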

IBM invention:

The next breakthrough occurred in America, where Herman Hollerith built a machine to tabulate the 1890 U.S. census. Hollerith's invention, known as the Hollerith desk, consisted of a card reader which sensed the holes in the cards, a gear-driven mechanism which could count (using Pascal's mechanism, which we still see in car odometers), and a large wall of dial indicators (a car speedometer is a dial indicator) to display the results of the count.

Hollerith's technique was successful: the 1890 census was completed in only 3 years at a savings of 5 million dollars. Hollerith built a company, the Tabulating Machine Company, which, after a few buyouts, eventually became International Business Machines, known today as IBM. IBM grew rapidly, and punched cards became ubiquitous. Your gas bill would arrive each month with a punch card you had to return with your payment; this card recorded the particulars of your account: your name, address, gas usage, etc. (One imagines there were some "hackers" in those days who would alter the punch cards to change their bill.)

IBM continued to develop mechanical calculators for sale to businesses to help with financial accounting and inventory accounting. One characteristic of both is that although you need to subtract, you don't need negative numbers, and you don't really have to multiply, since multiplication can be accomplished via repeated addition. But the U.S. military desired a mechanical calculator more optimized for scientific computation. One early success was the Harvard Mark I, built as a partnership between Harvard and IBM in 1944. This was the first programmable digital computer made in the U.S., but it was not a purely electronic computer: the Mark I was constructed out of switches, relays, rotating shafts, and clutches. The machine weighed 5 tons, incorporated 500 miles of wire, was 8 feet tall and 51 feet long, and had a 50-foot rotating shaft running its length, turned by a 5-horsepower electric motor. The Mark I ran non-stop for 15 years, sounding like a roomful of ladies knitting.



Turing Machine:

In 1936 Alan Turing of Cambridge University presented his idea of a theoretically simplified but fully capable computer, now known as the Turing machine. The concept of this machine, which could theoretically perform any mathematical computation, was very important in the future development of the computer. Another contribution by Turing is the Turing test, proposed to determine whether a computer has the ability to think. So far no one has built a computer that can pass the test, though there is a cash prize of US$100,000.
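A Turing machine is just a tape, a read/write head, and a table of rules. The toy Python simulator below (an invented example for illustration) runs a three-rule machine that flips every bit on its tape and then halts:

```python
def run(tape):
    # (state, symbol read) -> (symbol to write, head move, next state)
    rules = {
        ("flip", "0"): ("1", 1, "flip"),
        ("flip", "1"): ("0", 1, "flip"),
        ("flip", "_"): ("_", 0, "halt"),   # "_" marks blank tape
    }
    cells = list(tape) + ["_"]
    state, head = "flip", 0
    while state != "halt":
        write, move, state = rules[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells).rstrip("_")

print(run("1011"))
```

Despite its simplicity, adding more states and rules to such a table is, in principle, enough to express any mechanical computation, which is Turing's point.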



GENERATIONS OF COMPUTER:

The history of computer development is often described in terms of the different generations of computing devices. Each generation is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful, and more efficient and reliable devices.



  1. First generation (1940-1956):

The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. They were very expensive to operate and, in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions. First-generation computers relied on machine language, the lowest-level programming language understood by computers, and could only solve one problem at a time. Input was based on punched cards and paper tape, and output was displayed on printouts. The UNIVAC and ENIAC computers are examples of first-generation computing devices; the UNIVAC was the first commercial computer, delivered to a business client, the U.S. Census Bureau, in 1951. Examples include:

  • ENIAC (1946):

The title of forefather of today's all-electronic digital computers is usually awarded to ENIAC, which stood for Electronic Numerical Integrator and Computer. ENIAC was built at the University of Pennsylvania between 1943 and 1945 by two professors, John Mauchly and the 24-year-old J. Presper Eckert, who got funding from the war department after promising they could build a machine that would replace all the "computers". ENIAC filled a 20-by-40-foot room, weighed 30 tons, and used more than 18,000 vacuum tubes. Once ENIAC was finished and had proved worthy of the cost of its development, its designers set about eliminating the obnoxious fact that reprogramming the computer required physically modifying all the patch cords and switches.

  • EDVAC (1948):

It took days to change ENIAC's program. Eckert and Mauchly next teamed up with the mathematician John von Neumann to design EDVAC, the Electronic Discrete Variable Automatic Computer, which pioneered the stored program: its program was held entirely within its memory.

  • Floppy disk (1950):

It was invented by Yoshiro Nakamatsu. It provided faster access to programs and data than magnetic tape.

  • Compiler (1951):

Grace Hopper of the U.S. Navy developed the very first high-level language compiler. Before this invention, developing a computer program was tedious and prone to errors. A compiler translates a high-level language (which is easy for humans to understand) into a language that the computer can execute.
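Python's built-in compiler gives a small taste of this translation (an illustrative aside; Hopper's compiler, of course, targeted very different hardware):

```python
import dis

# A human-readable source line...
source = "total = price * quantity"

# ...is translated into low-level instructions that the machine
# (here, the Python virtual machine) can execute directly.
code = compile(source, "<example>", "exec")
dis.dis(code)   # prints the resulting low-level instructions
```

The programmer writes in terms of names like `price` and `quantity`; the compiler produces the load, multiply, and store steps the machine actually performs.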

Other types (1951):

After ENIAC and EDVAC came other computers with humorous names such as ILLIAC, JOHNNIAC, and, of course, MANIAC. ILLIAC was built at the University of Illinois at Champaign-Urbana.



By the end of the 1950s, computers were no longer one-of-a-kind hand-built devices owned only by universities and government research labs. Eckert and Mauchly left the University of Pennsylvania over a dispute about who owned the patents for their invention and decided to set up their own company. Their first product was the famous UNIVAC, the first commercial (that is, mass-produced) computer. The first UNIVAC was sold, appropriately enough, to the Census Bureau. UNIVAC was also the first computer to employ magnetic tape.

Advantages:

  • Fastest calculating devices of their time

  • Calculations controlled by electronic signals

Limitations:

  • Very bulky in size

  • Unreliable

  • Thousands of vacuum tubes were used

  • Air conditioning was required

  • Regular hardware failure

  • Constant maintenance required

  • Not portable

  • Manual assembly

  • Commercial production was difficult, and commercial use was limited
  2. Second Generation (1956-1963):


Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was invented in 1947 by Shockley, Bardeen, and Brattain at Bell Labs in the USA, but did not see widespread use in computers until the late 1950s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output. They moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers to store their instructions in memory, which moved from magnetic drum to magnetic-core technology. The first computers of this generation were developed for the atomic energy industry.

Advantages:

  • Small in size

  • More reliable

  • Less computation time

  • Less heat generated

  • Less prone to hardware failure

  • Better portability

  • Wide commercial usage

Limitations:

  • Air conditioning was required

  • Manual assembly

  • Commercial production was difficult and costly
  3. Third Generation (1964-1971):


The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers. Instead of punched cards and printouts, users interacted with third-generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time, with a central program monitoring the memory. Computers became accessible to a mass audience for the first time because they were smaller and cheaper than their predecessors. Examples include:

  • BASIC (1965):

BASIC (Beginner's All-purpose Symbolic Instruction Code) was developed by Thomas Kurtz and John Kemeny at Dartmouth College.

  • Computer Mouse (1965):

It was developed by Douglas Engelbart. It did not become popular until 1983, when Apple Computer adopted the concept.

  • ARPANET (1969):

ARPANET is a network of networks and the granddaddy of today's global Internet. It was developed by the U.S. Department of Defense to facilitate communications between research organizations and universities, and grew to connect around 60,000 computers.

Advantages:

  • Smaller in size as compared to previous generations

  • More reliable and less heat generation

  • Easily portable

  • Maintenance cost is low

  • Less power requirement

  • Totally general purpose

  • Commercial production was easy and cheap

Limitations:

  • Air condition system is required

  • Highly sophisticated technology required for manufacturing IC chips
  4. Fourth Generation (1971-Present):


The fourth generation was the result of the invention of the microprocessor. A microprocessor (µP) is a computer fabricated on an integrated circuit (IC); the microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. Computers had been around for 20 years before the first microprocessor was developed at Intel in 1971. The "micro" in the name refers to the physical size. Intel didn't invent the electronic computer, but they were the first to succeed in cramming an entire computer onto a single chip (IC). Intel was started in 1968 and initially produced only semiconductor memory (Intel invented both the DRAM and the EPROM, two memory technologies that are still going strong today). In 1969 they were approached by Busicom, a Japanese manufacturer of high-performance calculators (these were typewriter-sized units; the first shirt-pocket-sized scientific calculator was the Hewlett-Packard HP-35, introduced in 1972). Busicom wanted Intel to produce 12 custom calculator chips: one chip dedicated to the keyboard, another to the display, another for the printer, and so on. But integrated circuits were (and are) expensive to design, and this approach would have required Busicom to bear the full expense of developing 12 new chips, since those chips would be of use only to them. Examples include:

A typical Busicom desk calculator

  • Intel (1971):

The Intel 4004 was the first microprocessor (µP). The 4004 consisted of 2,300 transistors and was clocked at 108 kHz (i.e., 108,000 cycles per second). Compare this to the 42 million transistors and the 2 GHz clock rate (i.e., 2,000,000,000 cycles per second) of a Pentium 4. One of Intel's 4004 chips still functions aboard the Pioneer 10 spacecraft, now the man-made object farthest from the earth. Curiously, Busicom went bankrupt and never ended up using the ground-breaking microprocessor.

Intel followed the 4004 with the 8008 and 8080. Intel priced the 8080 microprocessor at $360, a jab at IBM's famous 360 mainframe, which cost millions of dollars.




  • MITS Altair 8080 computer (1975):

The 8080 was employed in the MITS Altair computer, the world's first personal computer (PC). It was personal all right: you had to build it yourself from a kit of parts that arrived in the mail. The kit didn't even include an enclosure.


  • Cray 1 (1976):

It was the first commercial supercomputer. Supercomputers are state-of-the-art machines designed to perform calculations as fast as the current technology allows.


  • Home user computers:

In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. A Harvard freshman by the name of Bill Gates decided to drop out of college so he could concentrate all his time on writing programs for this computer. This early experience put Bill Gates in the right place at the right time when IBM decided to standardize on Intel microprocessors for its line of PCs in 1981. The Intel Pentium 4 used in today's PCs is still compatible with the Intel 8088 used in IBM's first PC. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use them.

As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse and handheld devices.



Advantages:

  • Smaller in size as compared to previous generations

  • More reliable and less heat generation

  • Easily portable

  • Hardware failure is negligible

  • Much faster computation (picosecond range)

  • Less power requirement

  • Totally general purpose

  • Little manual labour required

  • Commercial production was easy and cheap

Limitations:

  • Highly sophisticated technology required for manufacturing the LSI chips

  5. Fifth Generation (Present and Beyond): Artificial Intelligence


Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.

Advantages:

  • Much faster

  • More intelligent (reasoning, learning etc)

  • Massive primary memory capabilities

  • Hardware continued to shrink in size

  • Growing vocabularies; "talking machines" have come onto the market


EXPLANATION:

Computers can perform complex and repetitive procedures quickly, precisely and reliably. Modern computers are electronic and digital. The actual machinery (wires, transistors, and circuits) is called hardware; the instructions and data are called software. All general-purpose computers require the following hardware components:



  • Central processing unit (CPU): The heart of the computer, this is the component that actually executes instructions organized in programs ("software") which tell the computer what to do.

  • Memory (fast, expensive, short-term memory): Enables a computer to store, at least temporarily, data, programs, and intermediate results.

  • Mass storage device (slower, cheaper, long-term memory): Allows a computer to permanently retain large amounts of data and programs between jobs. Common mass storage devices include disk drives and tape drives.

  • Input device: Usually a keyboard and mouse, the input device is the conduit through which data and instructions enter a computer.

  • Output device: A display screen, printer, or other device that lets you see what the computer has accomplished.

In addition to these components, many others make it possible for the basic components to work together efficiently. For example, every computer requires a bus that transmits data from one part of the computer to another.










CLASSIFICATION OF COMPUTERS:

I). Based on the operational principle of computers, they are categorized as analog, digital and hybrid computers.

Operational Principle

  1. Analog

  2. Digital

  3. Hybrid






  1. Analog Computers: These are almost extinct today. They differ from digital computers in that an analog computer can perform several mathematical operations simultaneously. It uses continuous variables for mathematical operations and utilizes mechanical or electrical energy. An analog computer measures, answering the question "HOW MUCH". The input data is not a number but, in fact, a physical quantity such as pressure, speed, or velocity. Other characteristics are:

  • Signals are continuous (e.g., 0 to 10 V)

  • Accuracy approximately 1%

  • High speed

  • Output is continuous

  • Time is lost in transmission



  2. Digital Computers: They use digital circuits and are designed to operate on two states, the bits 0 and 1, analogous to the states ON and OFF. Data on these computers is represented as a series of 0s and 1s. Digital computers are suitable for complex computation, have higher processing speeds, and are programmable. Digital computers are either general-purpose or special-purpose machines. Special-purpose computers are designed for specific types of data processing, while general-purpose computers are meant for general use. A digital computer counts, answering the question "HOW MANY". The input data is represented by a number. These computers are used for logical and arithmetic operations. Logic is the science of reasoning, which tells which statement follows next; a statement must be true or false, with no place for "perhaps". Arithmetic means performing mathematical operations on data, such as addition, subtraction, and multiplication.

Other characteristics are;

  • Signals have two levels (0 V or 5 V)

  • Accuracy unlimited

  • Sequential as well as parallel processing

  • Output is obtained only when the computation is completed
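Both the logical and the arithmetic operations described above reduce to manipulating bits, as a short Python illustration shows:

```python
a, b = 0b0110, 0b0011   # the numbers 6 and 3, written in binary

# Logical operations act bit by bit, each answer true (1) or false (0)...
print(a & b)            # bitwise AND
print(a | b)            # bitwise OR

# ...while arithmetic combines the same bits, carrying between positions.
print(a + b)
```

The `0b` prefix is just binary notation; the machine stores every value this way regardless of how the programmer writes it.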




  3. Hybrid Computers: These computers combine digital and analog computers. In this type of computer, the digital segments perform process control by converting analog signals to digital ones. They are special-purpose computers: accurate like digital computers and fast like analog computers. They are used in hospitals, spaceships, guided missiles, etc.



Classification of digital computers:

Processing Power

  1. Mainframe

  2. Microcomputers

  3. Minicomputers

  4. Supercomputers



Digital computers are classified on the basis of processing power into four sub-types:





  1. Mainframe Computers:

Mainframe computers are those that offer faster processing (on the order of 100 million operations per second) and greater storage capacity. The word "mainframe" comes from the metal frames; the mainframe is also known as the "father" computer. "Mainframe" was a term originally referring to the cabinet containing the central processing unit, or "main frame", of a room-filling Stone Age batch machine. After the emergence of smaller "minicomputer" designs in the early 1970s, the traditional big-iron machines were described as "mainframe computers" and eventually just as mainframes. Nowadays a mainframe is a very large and expensive computer capable of supporting hundreds, or even thousands, of users simultaneously.

The chief difference between a supercomputer and a mainframe is that a supercomputer channels all its power into executing a few programs as fast as possible, whereas a mainframe uses its power to execute many programs concurrently. In some ways, mainframes are more powerful than supercomputers because they support more simultaneous programs. But supercomputers can execute a single program faster than a mainframe. The distinction between small mainframes and minicomputers is vague, depending really on how the manufacturer wants to market its machines. Large organizations use mainframes for highly critical applications such as bulk data processing and ERP.

Most mainframe computers can host multiple operating systems and operate as a number of virtual machines, so they can substitute for several small servers. Applications: host computer, central database server. They are also used by banks and large companies. Examples include the IBM 4381, ICL 2900, and NEC 610.


  2. Microcomputers (personal computers):

A computer with a microprocessor as its central processing unit is known as a microcomputer. Microcomputers are the smallest computer systems: small, relatively inexpensive computers designed for an individual user. In price, personal computers range anywhere from a few hundred pounds to over five thousand pounds. All are based on the microprocessor technology that enables manufacturers to put an entire CPU on one chip. Businesses use personal computers for word processing, accounting, desktop publishing, and for running spreadsheet and database management applications. At home, the most popular uses for personal computers are playing games and, more recently, surfing the Internet. Their sizes range from calculator-size to desktop-size. The CPU is a microprocessor, and the microcomputer is also known as the "grandchild" computer. They do not occupy as much space as mainframes do. When supplemented with a keyboard and a mouse, microcomputers can be called personal computers. A monitor, a keyboard and other similar input-output devices, computer memory in the form of RAM, and a power supply unit come packaged in a microcomputer. These computers can fit on desks or tables and are the best choice for single-user tasks. Applications: personal computers, multi-user systems, offices.

Microcomputers first appeared in the late 1970s. One of the first and most popular personal computers was the Apple II, introduced in 1977 by Apple Computer. During the late 1970s and early 1980s, new models and competing operating systems seemed to appear daily. Today, the world of personal computers is basically divided between Apple Macintoshes and PCs. The principal characteristics of personal computers are that they are single-user systems and are based on microprocessors. However, although personal computers are designed as single-user systems, it is common to link them together to form a network. In terms of power, there is great variety. At the high end, the distinction between personal computers and workstations has faded. High-end models of the Macintosh and PC offer the same computing power and graphics capability as low-end workstations by Sun Microsystems, Hewlett-Packard, and DEC.



Personal computers come in different forms such as desktops, laptops and personal digital assistants. Let us look at each of these types of computers.

Personal Computers

  1. Desktop

  2. Laptop

  3. Netbook

  4. PDA

  5. Server

  6. Wearable Computer

  7. Tablet







  1. Desktops: A desktop is intended to be used at a single location. The spare parts of a desktop computer are readily available at relatively low cost, and power consumption is not as critical as in laptops. Desktops are widely popular for daily use in the workplace and in households. A desktop is designed to fit comfortably on top of a desk, typically with the monitor sitting on top of the computer. Desktop model computers are broad and low, whereas tower model computers are narrow and tall. Because of their shape, desktop model computers are generally limited to three internal mass storage devices. Desktop models designed to be very small are sometimes referred to as slimline models.

  2. Laptops: Similar in operation to desktops, laptop computers are miniaturized and optimized for mobile use. Laptops run on a single battery or an external adapter that charges the computer batteries. They are enabled with an inbuilt keyboard, touch pad acting as a mouse and a liquid crystal display. The screen folds down onto the keyboard when not in use. Their portability and capacity to operate on battery power have proven to be of great help to mobile users.

  3. Netbooks: They fall into the category of laptops, but are inexpensive and relatively smaller in size. When they first came onto the market, they had a smaller feature set and lower capacities than regular laptops, but with time netbooks began featuring almost everything that notebooks had. By the end of 2008, netbooks had begun to overtake notebooks in terms of market share and sales. A notebook is an extremely lightweight personal computer. Notebook computers typically weigh less than 6 pounds and are small enough to fit easily in a briefcase. Aside from size, the principal difference between a notebook computer and a personal computer is the display screen. Notebook computers use a variety of techniques, known as flat-panel technologies, to produce a lightweight and non-bulky display screen. The quality of notebook display screens varies considerably. In terms of computing power, modern notebook computers are nearly equivalent to personal computers: they have the same CPUs, memory capacity, and disk drives. However, all this power in a small package is expensive; notebook computers cost about twice as much as equivalent regular-sized computers. Notebook computers come with battery packs that enable you to run them without plugging them in, although the batteries need to be recharged every few hours.

  4. Personal Digital Assistants (PDAs): PDA is short for personal digital assistant, a handheld device that combines computing, telephone/fax, and networking features. A typical PDA can function as a cellular phone, fax sender, and personal organizer. Unlike portable computers, most PDAs are pen-based, using a stylus rather than a keyboard for input. This means that they also incorporate handwriting recognition features. Some PDAs can also react to voice input by using voice recognition technologies. The field of PDAs was pioneered by Apple Computer, which introduced the Newton MessagePad in 1993. Shortly thereafter, several other manufacturers offered similar products. To date, PDAs have had only modest success in the marketplace, due to their high price tags and limited applications. However, many experts believe that PDAs will eventually become common gadgets. PDAs are also called palmtops, hand-held computers and pocket computers. A PDA has a touch screen and a memory card for storage of data. PDAs can also be used as portable audio players, web browsers and smart phones, and most of them can access the Internet by means of Bluetooth or Wi-Fi communication.

  5. Servers: They are computers designed to provide services to client machines in a computer network. They have larger storage capacities and powerful processors. Running on them are programs that serve client requests and allocate resources such as memory and processor time to client machines. Usually they are very large in size, as they have large processors and many hard drives. They are designed to be fail-safe and resistant to crashes.
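The client-server idea described above can be sketched in a few lines of code: a server program waits for requests over the network, and a client program connects, sends a request, and reads back the reply. This is only an illustrative sketch using Python's standard `socketserver` module (the handler name and the upper-casing "service" are invented for the example); real servers add concurrency control, logging, and the fail-safety mentioned above.

```python
# Minimal sketch of the client-server model: a server listens for
# requests and returns a response to each connecting client.
import socket
import threading
import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read one line from the client and echo it back in upper case.
        request = self.rfile.readline().strip()
        self.wfile.write(request.upper() + b"\n")

def demo():
    # Start the server on an ephemeral port in a background thread.
    server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
    host, port = server.server_address
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # A client connects, sends a request, and reads the reply.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(b"hello server\n")
        reply = sock.makefile().readline().strip()

    server.shutdown()
    return reply

print(demo())  # HELLO SERVER
```

The same request-response pattern underlies web servers, file servers and database servers; only the protocol spoken between client and server differs.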

  6. Wearable Computers: A notable step in the evolution of computers was the creation of wearable computers. These computers can be worn on the body and are often used in the study of behavior modeling and human health. Military and health professionals have incorporated wearable computers into their daily routine as a part of such studies. When the users' hands and sensory organs are engaged in other activities, wearable computers are of great help in tracking human actions. Wearable computers do not have to be turned on and off; they remain in operation without user intervention.

  7. Tablet Computers: Tablets are mobile computers that are very handy to use. They combine features of laptops and handhelds and use touch screen technology. Tablets come with an onscreen keyboard or use a stylus or a digital pen. Apple's iPad redefined the class of tablet computers. These were some of the different types of computers used today; looking at the rate of advancement in technology, we can look forward to many more types of computers in the near future.




Minicomputers:

A minicomputer is a midsize computer. In the past decade the distinction between large minicomputers and small mainframes has blurred, as has the distinction between small minicomputers and workstations. In general, however, a minicomputer is a multiprocessing system capable of supporting up to 200 users simultaneously. Minicomputers are more compact and less expensive than mainframes; in terms of size and processing capacity, they lie between mainframes and microcomputers. Minicomputers are also called mid-range systems or workstations. The term began to be popularly used in the 1960s to refer to the relatively smaller third-generation computers. They took up the space that would be needed for a refrigerator or two and used transistor and core memory technologies. The 12-bit PDP-8 of the Digital Equipment Corporation was the first successful minicomputer. Minicomputers are also small general-purpose systems, generally more powerful and more useful than microcomputers. They can behave like terminals at remote sites: data is entered locally and sent to a mainframe, which acts as the host computer, and hence distributed processing is achieved. Minicomputers are also known as mid-range computers or, informally, "child" computers. Applications: departmental systems, network servers, workgroup systems. E.g. PRIME 9755, IBM System/36.


Supercomputers: Supercomputer is a broad term for one of the fastest computers currently available. Supercomputers are very expensive and are employed for specialized applications that require immense amounts of mathematical calculation (number crunching). For example, weather forecasting requires a supercomputer. Other uses of supercomputers include scientific simulations, (animated) graphics, fluid dynamics calculations, nuclear energy research, electronic design, and analysis of geological data (e.g. in petrochemical prospecting). They can calculate at a rate of 400 million numbers every second and are accurate up to 14 decimal places. Perhaps the best known supercomputer manufacturer is Cray Research. Highly calculation-intensive tasks can be performed effectively by means of supercomputers; quantum physics, mechanics, weather forecasting and molecular theory are best studied with them. Their capacity for parallel processing and their well-designed memory hierarchy give supercomputers large transaction processing power.

Supercomputers are those computers designed for scientific jobs such as weather forecasting and artificial intelligence. They are the fastest and most expensive computers. A supercomputer contains a number of CPUs which operate in parallel to make it faster. It is also informally known as the "grandfather" computer. Applications: weather forecasting, weapons research and development. E.g. Cray-1, CYBER 205.
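The parallel operation of many CPUs mentioned above follows a divide-and-combine pattern: a large computation is split into chunks, workers compute the chunks concurrently, and the partial results are combined. The sketch below shows only the pattern, using a Python thread pool in place of the many physical processors of a real supercomputer; the function names and chunking scheme are invented for the example.

```python
# Toy illustration of parallel processing: split a big sum into chunks,
# let workers handle the chunks concurrently, then combine the results.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    # Each worker sums its own slice of the range independently.
    start, stop = bounds
    return sum(range(start, stop))

def parallel_sum(n, workers=4):
    # Split [0, n) into one chunk per worker; the last chunk absorbs
    # any remainder so the chunks exactly cover the range.
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

# Same answer as the serial sum, computed by cooperating workers.
print(parallel_sum(1_000_000))  # 499999500000
```

Because the chunks are independent, adding more workers (or, on a supercomputer, more processors) lets the work finish proportionally faster, which is the essence of the speed-up the text describes.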

THE END


****************************************************

AN EASY APPROACH TO COMPUTER AND ITS APPLICATION IN PHARMACY

