
A.L.MAYUKHA 1210314501 B5 3/4 dated- 10/7/2016
ADVANCED COMPUTER ARCHITECTURE

ASSIGNMENT-1

1. Describe four characteristics of MIMD multiprocessors that distinguish them from multicomputer systems or computer networks?

MIMD: Multiple Instruction streams, Multiple Data streams. The MIMD class of parallel architecture is the most familiar and possibly the most basic form of parallel processor. An MIMD architecture consists of a collection of N independent, tightly coupled processors, each with memory that may be common to all processors and/or local and not directly accessible by the other processors.

The following characteristics of MIMD multiprocessors distinguish them from multicomputer systems:

CHARACTERISTICS:


Complexity of architecture and cost: complexity is high and the cost is medium.

Low synchronization overheads: explicit data structures and operations are needed.

Efficient execution of variable-time instructions: total execution time equals the maximum execution time on a given processor.

Lower instruction cost: one decoder in each PE.

2. A) Parallelism vs. Pipelining



Handler has proposed a classification scheme for identifying the parallelism degree and pipelining degree built into the hardware structures of a computer system.

He considered parallel-pipeline processing at three subsystem levels:

  1. Processor Control Unit (PCU)

  2. Arithmetic Logic Unit (ALU)

  3. Bit-level circuit (BLC)

A computer system C can be characterized by a triple containing six independent entities, as defined below:

T(C) = <K × K′, D × D′, W × W′>

Where K = the number of processors (PCUs) within the computer

D = the number of ALUs under the control of one PCU

W = the word length of an ALU or of a PE

W′ = the number of pipeline stages in all ALUs or in a PE

D′ = the number of ALUs that can be pipelined

K′ = the number of PCUs that can be pipelined

Several real computer examples are used to clarify the above parametric descriptions. The Texas Instruments Advanced Scientific Computer (TI-ASC) has one controller controlling four arithmetic pipelines, each with a 64-bit word length and eight pipeline stages. Thus we have

T(ASC) = <1 × 1, 4 × 1, 64 × 8> = <1, 4, 64 × 8>

Whenever the second entity K′, D′, or W′ equals 1, we drop it, since pipelining of one stage or of one unit is meaningless.
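
As a quick illustration, the following Python sketch (a minimal sketch; the function name and representation are our own, not any standard API) formats a Handler triple and applies the drop rule just described:

def handler_triple(K, Kp, D, Dp, W, Wp):
    """Format T(C) = <K x K', D x D', W x W'>, dropping second entities equal to 1."""
    def entity(first, second):
        # pipelining of one stage or of one unit is meaningless, so "x 1" is dropped
        return f"{first} x {second}" if second > 1 else str(first)
    return "<" + ", ".join([entity(K, Kp), entity(D, Dp), entity(W, Wp)]) + ">"

print(handler_triple(K=1, Kp=1, D=4, Dp=1, W=64, Wp=8))  # <1, 4, 64 x 8>, the TI-ASC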


B) Control versus Data Parallelism:



The concepts of control flow and data flow computing are distinguished by how the computation sequence is controlled in two distinct program representations.

  • Control flow computers use shared memory to hold program instructions and data objects

  • In data flow computers, the execution of an instruction is driven by data availability instead of being guided by a program counter (a toy sketch of this firing rule follows the list)

  • Computational results (data tokens) are passed directly between instructions.
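
To make the data-driven firing rule concrete, here is a toy Python sketch (an illustration of the idea only, not a model of any real dataflow machine; all names are invented):

def dataflow_run(instructions, tokens):
    """Fire any instruction whose operand tokens are all available; no program counter."""
    pending = list(instructions)
    while pending:
        fired = False
        for instr in list(pending):
            name, inputs, output, fn = instr
            if all(i in tokens for i in inputs):        # firing rule: all operands present
                tokens[output] = fn(*(tokens[i] for i in inputs))
                pending.remove(instr)                   # result token passed on directly
                fired = True
        if not fired:
            raise RuntimeError("deadlock: no instruction can fire")

# Compute (a + b) * (a - b); execution order is set purely by data availability.
program = [
    ("mul", ["s", "d"], "out", lambda s, d: s * d),
    ("add", ["a", "b"], "s",   lambda a, b: a + b),
    ("sub", ["a", "b"], "d",   lambda a, b: a - b),
]
tokens = {"a": 5, "b": 3}
dataflow_run(program, tokens)
print(tokens["out"])  # 16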

3. Summarize all forms of parallelism that can be exploited at different levels of a computer system, including multiprocessor and uniprocessor approaches, and indicate example computers that have achieved various forms of parallelism?

There are four ascending levels. They are:

1) Data Processing

2) Information Processing

3) Knowledge Processing

4) Intelligence Processing



-> The data space is the largest, including numbers in various formats, character symbols, and multidimensional measures.

-> Data objects are mutually unrelated. Huge amounts of data are being generated, especially in the scientific, business, and government sectors.

-> Information is a collection of data objects that are related by some syntactic relation. Information therefore forms a subspace of the data space.

-> Knowledge consists of information items with some semantic meaning; knowledge is thus a subspace of the information space.

-> Intelligence is derived from a collection of knowledge items and forms the innermost triangle in the Venn diagram.

-> Many computer users today are shifting from pure data processing to information processing. A high degree of parallelism has been found at these levels.

Basic Uniprocessor Architecture:

  • A typical uniprocessor computer consists of three major components: the main memory, the CPU (Central Processing Unit), and the I/O (Input-Output) subsystem.

  • Two architectures of commercially available uniprocessor computers are described below to show the relation between the three subsystems



  1. System Architecture of the supermini VAX-11/780 uniprocessor system



  • The CPU contains the master controller of the VAX system

  • There are sixteen 32-bit general-purpose registers, one of which is the Program Counter (PC). There is also a special CPU status register containing information about the current state of the process being executed

  • The CPU contains an ALU with an optional floating-point accelerator, and some local cache memory with an optional diagnostic memory

  • The operator can intervene in the CPU through the console, which is connected to a floppy disk

  • The CPU, the main memory (2^32 words of 32 bits each) and the I/O subsystem are all connected to a common bus, the synchronous backplane interconnection (SBI)

  • Through this bus, all I/O devices can communicate with each other, with the CPU, or with the memory

  • I/O devices can be connected directly to the SBI through the Unibus and its controller, or through a Massbus and its controller



  2. System Architecture of the mainframe IBM System 370/Model 168 uniprocessor computer






  • The CPU contains the instruction decoding and execution units as well as cache

  • Main memory is divided into four units referred to as logical storage units (LSUs) that are four-way interleaved

  • The storage controller provides multiport connections between the CPU and the four LSUs



  • Peripherals are connected to the system via high-speed I/O channels which operate asynchronously with the CPU

Parallelism in Multiprocessor Systems:

Parallel processing systems achieve parallelism by having more than one processor perform tasks simultaneously. Because multiprocessor systems are more complicated than uniprocessor systems, there are many different ways to organize the processors and memory. The researcher Michael J. Flynn therefore proposed a classification based on the flow of instructions and data within the computer, called Flynn's classification.

4. Write about Parallel Processing Applications?

Ans:

Fast and efficient computers are in high demand in many fields, including scientific, engineering, energy resource, medical, military, and basic research areas. To design a cost-effective supercomputer, or to utilize an existing parallel computer, one must identify the computational needs of important applications. With changing trends, we introduce only the major computations and leave readers to identify their own computational needs.

Theoretical scientists develop mathematical models that computer engineers solve numerically. The numerical results may then suggest new theories. Experimental science provides data for computational science.

1. Predictive modeling and simulations:

Multidimensional modeling of the atmosphere, the earth environment, outer space, and the world economy has become a major concern of world scientists.

--Numerical weather forecasting: Weather and climate researchers will never run out of need for faster computers. Weather modeling is necessary for short-range forecasts and for long-range hazard predictions, such as flood, drought, and environmental pollution.

--Oceanography and astrophysics: Since oceans can store and transfer heat and exchange it with the atmosphere, a good understanding of the oceans would help in the following:

A. Climate predictive analysis

B. Fishery management

C. Ocean resource exploration

D. Coastal dynamics and tides

--Socioeconomics and government use: Large computers are in great demand in the areas of econometrics, social engineering, government census, crime control, and the modeling of the world economy for the year 2000.

2. Engineering design and automation:

Supercomputers are in great demand for solving many engineering design problems.

--Finite-element analysis: The design of dams, bridges, ships, supersonic jets, tall buildings, and space vehicles requires the resolution of large systems of algebraic equations or partial differential equations. Computational engineers have developed finite-element codes for the dynamic analysis of structures; high-order finite elements are used to describe the spatial behavior.

--Computational aerodynamics: Large-scale computers have made significant contributions in providing new technological capabilities and economies in pressing ahead with aircraft and spacecraft lift and turbulence studies.



--Artificial intelligence and automation: Intelligent I/O interfaces are demanded for supercomputers that must communicate directly with human beings in images, speech, and natural language.

--Remote sensing applications: Computer analysis of remotely sensed earth-resource data has many potential applications in agriculture, forestry, and water resources. Explosive amounts of pictorial information need to be processed in this area.

3. Energy resource exploration: Using computers in the energy area results in lower production costs and higher safety.

--Seismic exploration: Seismic exploration sets off a sonic wave by explosive or by jamming a heavy hydraulic ram into the ground and vibrating it in a controlled pattern. The demand for cost-effective computers for seismic signal processing is increasing sharply.

--Reservoir modeling: Supercomputers are being used to perform three-dimensional modeling of oil fields. The reservoir problem is solved by using the finite-difference method on a three-dimensional representation of the field. Geologic core samples are examined to project the field's expected performance forward in time.

--Plasma fusion power: Nuclear fusion researchers are pushing to use a computer 100 times more powerful than any existing one to model the plasma dynamics. Synthetic nuclear fusion requires the heating of plasma to a temperature of 100 million degrees. This is a very costly effort.

--Nuclear reactor safety: These studies attempt to provide for:

On-line analysis of reactor condition

Automatic control for normal and abnormal operations

Quick assessment of potential accident mitigation procedures

4. Medical, military, and basic research:

In the medical area, fast computers are needed in computer-assisted tomography, artificial heart design, liver diagnosis, brain damage estimation, and genetic engineering studies. Military defense needs supercomputers for weapon design, effects simulation, and other electronic warfare applications.

5. Mention all the mechanisms of parallel processing and explain balancing of bandwidths for subsystems?



Parallel processing means breaking down a program's instructions and running them through a number of different processors. In a uniprocessor this is not possible directly, so a number of mechanisms are used to make a uniprocessor system behave like a multiprocessor system. The different mechanisms are:

  1. Multiplicity of functional units

  2. Parallelism and pipelining within the CPU

  3. Overlapped CPU and I/O operations

  4. Use of hierarchical memory systems

  5. Balancing of subsystem bandwidths

  6. Multiprogramming and time sharing

  • Balancing of subsystem bandwidths:

The bandwidth of a system is defined as the number of operations performed per unit time. In the case of a main memory system, the memory bandwidth is measured by the number of memory words that can be accessed per unit time.

In general, the CPU is the fastest unit in a computer. With a processor cycle time tp, a main memory cycle time tm, and an average access time td for the I/O devices, it is observed that:

td > tm > tp

For a main memory delivering W words per memory cycle (W-way interleaving), the memory bandwidth is

Bm = W / tm

Memory access conflicts may cause delayed access of some of the processor requests. Therefore, the utilized memory bandwidth Bm^u is lower than Bm; with M interleaved memory modules, it is approximately

Bm^u = Bm / √M

For the bandwidth of external memory and I/O devices, the average access rate per tape is about 1 megabyte/s, so for 10 tapes it would be 10 megabytes/s. A modern magnetic tape unit has a data transfer rate of around 1.5 megabytes/s.

The bandwidth of a processor is measured as the maximum CPU computation rate Bp, such as 160 megaflops for the Cray-1 and 12.5 million instructions per second for the IBM 370/168. In practice, the utilized CPU rate Bp^u is less than Bp; it is measured by the number of word results Rw delivered over the total execution time T:

Bp^u = Rw / T

The following relationship has been observed between the bandwidths of the major subsystems in a high-performance uniprocessor:

Bm ≥ Bm^u ≥ Bp ≥ Bp^u ≥ Bd

This implies that the main memory has the highest bandwidth, since it must be updated by both the CPU and the I/O devices. Because of these unbalanced speeds, we need to match the processing powers of the three subsystems.
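
The following Python sketch plugs hypothetical numbers (all values assumed for illustration, not taken from the text) into the bandwidth definitions above and checks the inequality:

from math import sqrt

t_m = 500e-9            # main memory cycle time in seconds (assumed)
W, M = 4, 4             # words per memory cycle and number of interleaved modules (assumed)

B_m  = W / t_m          # maximum memory bandwidth: 8e6 words/s
B_mu = B_m / sqrt(M)    # utilized memory bandwidth after access conflicts: 4e6 words/s

B_p  = 3.0e6            # maximum CPU computation rate in words/s (assumed)
R_w, T = 2.0e6, 1.0     # word results delivered and total execution time (assumed)
B_pu = R_w / T          # utilized CPU rate: 2e6 words/s

B_d  = 1.5e6            # aggregate I/O device bandwidth in words/s (assumed)

assert B_m >= B_mu >= B_p >= B_pu >= B_d   # Bm >= Bm^u >= Bp >= Bp^u >= Bd holds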

Bandwidth balancing between CPU and memory:

The speed gap between the CPU and the main memory can be closed by using a fast cache memory between them. The cache should have an access time close to the processor cycle time. A block of memory words is moved from the main memory into the cache so that the instructions needed next are available from the cache most of the time. The cache serves as a data and instruction buffer.
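
A minimal numeric sketch of why this works, assuming a hypothetical hit ratio and access times (the numbers are invented, chosen only to show the effect):

t_cache = 50e-9    # cache access time in seconds, close to the processor cycle (assumed)
t_main  = 500e-9   # main memory cycle time in seconds (assumed)
h       = 0.95     # fraction of references satisfied by the cache (assumed hit ratio)

t_eff = h * t_cache + (1 - h) * t_main
print(f"effective access time: {t_eff * 1e9:.1f} ns")   # 72.5 ns, near cache speed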

Bandwidth balancing between memory and I/O devices:

Input-output channels with different speeds can be used between the slow I/O devices and the main memory. These I/O channels perform buffering and multiplexing functions to transfer data from multiple disks into the main memory by stealing cycles from the CPU. Furthermore, intelligent disk controllers or database machines can be used to filter out irrelevant data just off the tracks of the disk. This filtering alleviates the I/O channel saturation problem. The combined buffering, multiplexing, and filtering operations can thus provide a faster, more effective data transfer rate, matching that of the memory.

In the ideal case, we wish to achieve a totally balanced system, in which the entire memory bandwidth matches the sum of the bandwidths of the processor and the I/O devices, that is:

Bp^u + Bd = Bm^u

6. Write about Architectural Classification Schemes?



Three computer architectural classification schemes are presented here:

1. Flynn’s Classification

Computer organisations are characterised by the multiplicity of the hardware provided to service the instruction and data streams. Listed below are Flynn's four machine organisations:

  • Single instruction stream, single data stream (SISD)

  • Single instruction stream, multiple data stream (SIMD)

  • Multiple instruction stream, single data stream (MISD)

  • Multiple instruction stream, multiple data stream (MIMD)

SISD computer organisation:

Instructions are executed sequentially but may be overlapped in their execution stages (pipelining). Most SISD uniprocessor systems are pipelined. An SISD computer may have more than one functional unit, all under the supervision of one control unit.



SIMD Computer Organisation:

Here, there are multiple processing elements supervised by the same control unit. All PEs receive the same instruction broadcast from the control unit but operate on different data sets from distinct data streams. The shared memory subsystem may contain multiple modules.




MISD computer organisation:

There are n processor units, each receiving distinct instructions operating over the same data stream and its derivatives. The results of one processor become the input of the next processor in the macropipe. No real embodiment of this class exists.




MIMD Computer Organisation:

Most multiprocessor systems and multiple-computer systems can be classified in this category. An intrinsic MIMD computer implies interactions among the n processors because all memory streams are derived from the same data space shared by all processors. If the n data streams were derived from disjoint subspaces of the shared memories, then we would have the so-called multiple SISD (MSISD) operation, which is nothing but a set of n independent SISD uniprocessor systems.
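
As a programming-level analogy to the SISD/SIMD distinction (only an analogy, since Flynn's categories describe hardware organisation, not source code), compare a scalar loop with a vectorised NumPy operation:

import numpy as np

a = np.arange(8)
b = np.arange(8, 16)

# SISD-style: one instruction stream handles one data item at a time
c_scalar = np.empty(8, dtype=a.dtype)
for i in range(8):
    c_scalar[i] = a[i] + b[i]

# SIMD-style: one (logical) "add" instruction applied across the whole data stream
c_vector = a + b

assert (c_scalar == c_vector).all()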



2. Feng’s Classification

This classification is based on the degree of parallelism of various computer architectures. The maximum number of binary digits that can be processed within a unit time by a computer system is called the maximum parallelism degree P. The average parallelism degree can be defined as

Pa = (1/T) ∑ Pi  (summing over i = 1, …, T)

where Pi is the number of bits processed in the ith processor cycle and T is the total number of processor cycles.

In general, the utilisation rate is

μ = Pa / P

If the computing power of the processor is fully utilised, then Pi = P for all i and μ = 1 (100 percent utilisation). The utilisation rate depends on the application program being executed.
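
A short Python sketch with made-up per-cycle bit counts Pi (the values are assumed, purely for illustration):

P   = 64                       # maximum parallelism degree of the machine (assumed)
P_i = [64, 32, 64, 16, 64]     # bits processed in each of T = 5 cycles (assumed)

T   = len(P_i)
P_a = sum(P_i) / T             # average parallelism degree: 48.0
mu  = P_a / P                  # utilisation rate: 0.75; mu = 1 only if Pi = P for all i
print(P_a, mu)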



The figure demonstrates the classification of computers by their maximum parallelism degrees. The horizontal axis shows the word length n. The vertical axis corresponds to the bit-slice length m. Both length measures are in terms of the number of bits contained in a word or in a bit slice. A bit slice is a string of bits, one from each of the words at the same vertical bit position. The maximum parallelism degree P(C) of a given computer system C is represented by the product of the word length n and the bit-slice length m, that is

P(C) = n · m

The pair (n, m) corresponds to a point in the computer space shown by the coordinate system in the figure. P(C) is equal to the area of the rectangle defined by the integers n and m.

There are four types of processing methods that can be seen from the figure.

  • Word-serial and bit-serial (WSBS)

  • Word-parallel and bit-serial (WPBS)

  • Word-serial and bit-parallel (WSBP)

  • Word-parallel and bit-parallel (WPBP)

WSBS has been called bit-serial processing because one bit (n = m = 1) is processed at a time, a rather slow process; this was done only in first-generation computers. WPBS (n = 1, m > 1) has been called bit-slice processing, WSBP (n > 1, m = 1) word-slice processing, and WPBP (n > 1, m > 1) is known as fully parallel processing, in which an array of n × m bits is processed at one time.
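
The four processing methods follow mechanically from the pair (n, m), as this small Python sketch shows (the sample (n, m) pairs are chosen only for illustration):

def feng_class(n: int, m: int) -> str:
    """Classify a machine in Feng's computer space from word length n and bit-slice length m."""
    words = "WP" if m > 1 else "WS"   # word-parallel if m > 1 words are processed at once
    bits  = "BP" if n > 1 else "BS"   # bit-parallel if all n > 1 bits of a word go at once
    return words + bits

for n, m in [(1, 1), (1, 16), (32, 1), (32, 16)]:
    print((n, m), feng_class(n, m), "P(C) =", n * m)
# (1, 1) WSBS, (1, 16) WPBS, (32, 1) WSBP, (32, 16) WPBP with P(C) = 512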

3. Handler’s Classification

Handler has proposed a classification scheme for identifying the parallelism degree and pipelining degree built into the hardware structures of a computer system. He considers parallel-pipeline processing at three subsystem levels:

  • Processor control unit (PCU)

  • Arithmetic logic unit (ALU)

  • Bit-level circuit (BLC)

The functions of the PCU and the ALU should be clear. Each PCU corresponds to one processor or one CPU. The ALU is equivalent to the processing element (PE) we specified for SIMD processors. The BLC corresponds to the combinational logic circuitry needed to perform 1-bit operations in the ALU.

A computer system C can be characterised by a triple containing six independent entities, as defined below:

T(C) = <K × K′, D × D′, W × W′>

Where K = the number of processors (PCUs) within the computer

D = the number of ALUs under the control of one PCU

W = the word length of an ALU or of a PE

W′ = the number of pipeline stages in all ALUs or in a PE

D′ = the number of ALUs that can be pipelined

K′ = the number of PCUs that can be pipelined

Several real computer examples, such as the TI-ASC triple given in question 2(A), are used to clarify the above parametric descriptions.




