The four basic functions of a computer are input, processing, output and storage:
- Input is the information which is entered into the computer.
- Processing is performing operations on or manipulating data.
- Output is the result of the data processing.
- Storage refers to devices that can retain the data when the computer is deactivated.
The central processing unit (CPU) processes the data. Devices such as read-only memory (ROM), the hard drive, compact discs (CDs) and digital versatile discs (DVDs) can store the data. When you input information into your computer with the mouse or keyboard, you send a signal to the CPU. The CPU contains an arithmetic logic unit that performs basic arithmetic, and a control unit that directs the computer to execute programs stored in memory. The rate at which a computer executes instructions is measured in millions of instructions per second (MIPS); the processor's clock speed is measured in gigahertz (GHz). Once the information has been processed, it is output in human-readable form through the monitor and speakers; it can also be stored again for later processing. Storage media can be used for both input and output of data.
The four basic functions of a computer make it possible for us to perform many tasks that were previously impossible. Using a computer, you can balance your checkbook, purchase merchandise, send and receive messages, do research, process your photographs, create music and store crucial data, among other things. With essential computer skills, you can find better employment at higher pay. Because computers are easily networked, they can help people in remote parts of the world communicate more quickly and easily than with traditional methods.
Computers can also be addictive. Computer gaming, in particular, can cause people to neglect essential responsibilities. Working long hours at a computer can contribute to eye strain, repetitive strain injury (RSI) and lower back pain, and many people forget to eat or exercise when they spend long periods at a computer. Using ergonomic devices and furniture and taking frequent breaks can help prevent many of these computer-related health issues.
A unit of measurement is a definite magnitude of a physical quantity, defined and adopted by convention and/or by law, that is used as a standard for measurement of the same physical quantity. In computing and telecommunications, a unit of information is the capacity of some standard data storage system or communication channel, used to measure the capacities of other systems and channels. In information theory, units of information are also used to measure the information contents or entropy of random variables.
The most common units are the bit, the capacity of a system that can exist in only two states, and the byte (or octet), which is equivalent to eight bits. Multiples of these units can be formed with the SI prefixes (power-of-ten prefixes) or the newer IEC binary prefixes (power-of-two prefixes). Information capacity is a dimensionless quantity, because it refers to a count of binary symbols.
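To make the difference between the two prefix families concrete, here is a minimal Python sketch (an illustration added here, not from the original text); the dictionary names are arbitrary and chosen only for readability.

```python
# Decimal (SI) and binary (IEC) interpretations of common storage prefixes.
# Names are illustrative only, not part of any standard library.
SI_PREFIXES = {"kB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12, "PB": 10**15}
IEC_PREFIXES = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40, "PiB": 2**50}

for (si_sym, si_val), (iec_sym, iec_val) in zip(SI_PREFIXES.items(), IEC_PREFIXES.items()):
    # The binary unit is always larger, and the gap widens with each step.
    gap = iec_val / si_val - 1
    print(f"1 {si_sym} = {si_val:>19,} bytes | 1 {iec_sym} = {iec_val:>19,} bytes ({gap:+.1%})")
```

Running this prints, for example, "1 kB = 1,000 bytes | 1 KiB = 1,024 bytes (+2.4%)", and the percentage difference grows with each larger prefix.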
- A bit (a contraction of binary digit) is the basic unit of information in computing and telecommunications; a bit represents either 1 or 0 (one or zero) only. The representation may be implemented, in a variety of systems, by means of a two-state device. In computing, a bit can be defined as a variable or computed quantity that can have only two possible values. These two values are often interpreted as binary digits and are usually denoted by the numerical digits 0 and 1. The two values can also be interpreted as logical values (true/false, yes/no), algebraic signs (+/−), activation states (on/off), or any other two-valued attribute. The correspondence between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. The length of a binary number may be referred to as its "bit-length". In information theory, one bit is typically defined as the uncertainty of a binary random variable that is 0 or 1 with equal probability,[1] or the information that is gained when the value of such a variable becomes known.[2] In quantum computing, a quantum bit or qubit is a quantum system that can exist in a superposition of the two bit values, "true" and "false". A short sketch after this list illustrates bit and byte values in code.
- The byte is a unit of digital information in computing and telecommunications that most commonly consists of eight bits. Historically, a byte was the number of bits used to encode a single character of text in a computer[1][2] and for this reason it is the basic addressable element in many computer architectures. The size of the byte has historically been hardware-dependent, and no definitive standards existed that mandated the size. The de facto standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte. With ISO/IEC 80000-13, this common meaning was codified in a formal standard. Many types of applications use variables representable in eight or fewer bits, and processor designers optimize for this common usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit size.[3]
- The kilobyte (symbol: kB) is a multiple of the unit byte for digital information. Although the prefix kilo- means 1000, the term kilobyte and the symbols kB or KB have historically been used to refer to either 1024 (2^10) bytes or 1000 (10^3) bytes, depending on context, in the fields of computer science and information technology.[1][2][3]
- The megabyte is a multiple of the unit byte for digital information storage or transmission with three different values depending on context: 1,048,576 bytes (2^20), generally for computer memory;[1][2] one million bytes (10^6, see prefix mega-), generally for computer storage;[1][3] and, in rare cases, 1000×1024 (1,024,000) bytes.[3] The IEEE Standards Board has confirmed that mega- means 1,000,000, with exceptions allowed for the base-two meaning.[3] It is commonly abbreviated as Mbyte or MB (compare Mb, for the megabit).
- The gigabyte is a multiple of the unit byte for digital information storage. The prefix giga means 10^9 in the International System of Units (SI); therefore 1 gigabyte is 1,000,000,000 bytes. The unit symbol for the gigabyte is GB or Gbyte, but not Gb (lower case b), which is typically used for the gigabit. Historically, the term has also been used in some fields of computer science and information technology to denote the gibibyte, or 1,073,741,824 (1024^3 or 2^30) bytes.
- The terabyte is a multiple of the unit byte for digital information. The prefix tera means 10^12 in the International System of Units (SI); therefore 1 terabyte is 1,000,000,000,000 bytes, or 1 trillion (short scale) bytes, or 1000 gigabytes. Expressed in binary prefixes, 1 terabyte is 0.9095 tebibytes, or 931.32 gibibytes (these conversions are reproduced in a sketch after this list). The unit symbol for the terabyte is TB or Tbyte, but not Tb (lower case b), which refers to the terabit.
- A petabyte (derived from the SI prefix peta-) is a unit of information equal to one quadrillion (short scale) bytes, or 1000 terabytes. The unit symbol for the petabyte is PB. The prefix peta (P) indicates the fifth power of 1000:
1 PB = 1,000,000,000,000,000 B = 1000^5 B = 10^15 B = 1 million gigabytes = 1 thousand terabytes
The pebibyte (PiB), using a binary prefix, is the corresponding power of 1024, which is more than 12% greater (2^50 bytes = 1,125,899,906,842,624 bytes).
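As referenced in the bit and byte entries above, the following short Python sketch (illustrative only, not part of the original text) shows that one 8-bit byte covers the values 0 through 255, how bit-length grows past one byte, and how a single bit maps onto a two-valued attribute.

```python
# One byte is 8 bits, so it can hold 2**8 = 256 distinct values (0 through 255).
BITS_PER_BYTE = 8
print(2 ** BITS_PER_BYTE)         # 256 distinct values
print(2 ** BITS_PER_BYTE - 1)     # 255, the largest unsigned value in one byte

# The "bit-length" of a binary number is the number of bits needed to write it.
print((255).bit_length())         # 8: still fits in one byte
print((256).bit_length())         # 9: one byte is no longer enough

# A bit's two values can stand for any two-valued attribute, e.g. false/true.
for bit in (0, 1):
    print(bit, bool(bit))          # 0 -> False, 1 -> True
```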
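A second sketch, again only an added illustration, reproduces the specific figures quoted in the terabyte and petabyte entries by comparing the decimal (power-of-ten) and binary (power-of-two) readings of those prefixes.

```python
# Decimal (SI) and binary (IEC) readings of the terabyte and petabyte prefixes.
TB, TiB, GiB = 10**12, 2**40, 2**30
PB, PiB = 10**15, 2**50

print(f"1 TB  = {TB / TiB:.4f} TiB")                    # 0.9095 tebibytes
print(f"1 TB  = {TB / GiB:.2f} GiB")                    # 931.32 gibibytes
print(f"1 PiB = {PiB:,} bytes")                         # 1,125,899,906,842,624 bytes
print(f"1 PiB is {PiB / PB - 1:.1%} larger than 1 PB")  # about 12.6% greater
```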
Table of units of measure

Unit                  Equivalent (binary interpretation)
1 kilobyte (KB)       1,024 bytes
1 megabyte (MB)       1,048,576 bytes (1,024 KB)
1 gigabyte (GB)       1,073,741,824 bytes (1,024 MB)
1 terabyte (TB)       1,099,511,627,776 bytes (1,024 GB)
1 petabyte (PB)       1,125,899,906,842,624 bytes (1,024 TB)
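As a quick check, this final sketch (illustrative only) computes the table's values directly from successive powers of 1024.

```python
# Each binary unit in the table above is 1,024 times the previous one.
units = ["kilobyte (KB)", "megabyte (MB)", "gigabyte (GB)",
         "terabyte (TB)", "petabyte (PB)"]

for power, unit in enumerate(units, start=1):
    print(f"1 {unit} = {1024 ** power:,} bytes")
```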