Kurt Leyba

Digital Computer Design

ECE 5171

Final Report



Hypercomputing



Abstract:

I first encountered this subject on Slashdot, where a short article had recently been posted about Star Bridge Systems, a small company in Midvale, UT. The company claimed to have developed the first hypercomputer. This was the first time I had heard the term, so I became interested and did some research on the topic. This paper presents my findings.


Introduction:

Research has found that in structural analysis, an "analog computer" style of programming yields more favorable solution algorithms than current solution methods. This style of programming is facilitated by VIVA, a graphical programming language from Star Bridge Systems.

Star Bridge Systems claims to have created a reconfigurable "hypercomputer" that performs like a supercomputer but sits on a desktop, uses very little electricity, needs no special cooling, and costs as little as $175,000. The company accomplished this by connecting about a dozen field-programmable gate array (FPGA) chips.

CPU vs. FPGA:


At the core of most computer architectures is the CPU. It consists of a large number of gates wired into various circuits, and these circuits implement all the functions needed for its operation. CPUs are designed to be general-purpose and capable of performing many different functions. Whenever an operation is performed, only a small fraction of the gates is actually used while the rest sit idle and consume power. Most scientific calculations also require floating-point units, and the quantity of these is limited by the number of CPUs.

In FPGA’s each chip can handle thousands of tasks at the same time in parallel, unlike a microprocessor, which can only do one thing at a time. This is why a couple of FPGA’s can out perform a supercomputer which may have thousands of microprocessors connected together. When an operation is to be performed in a FPGA, the operating system interconnects as many gates as possible to perform the particular operation. This interconnection is existent only for the duration of the operation. Thus a specialized processor is created for the task at hand. It is even possible that when a job does not use all the available gates another completely unrelated task is performed with the unused gates, thus performing to different tasks concurrently.

An example of parallel computing is calculating b*c+d: a CPU needs two steps (a multiply, then an add), while the FPGA does it in one step, provided the gates are properly configured, as the sketch below illustrates.
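
To make the step counts concrete, here is a small Python sketch of my own (Star Bridge has published no such code) contrasting the two styles. The CPU model issues one arithmetic instruction per step; the FPGA model stands for a multiplier whose output is wired directly into an adder, so the whole expression settles in a single step.

    def cpu_evaluate(b, c, d):
        """Sequential model: one arithmetic instruction per step."""
        t = b * c        # step 1: multiply
        r = t + d        # step 2: add
        return r, 2

    def fpga_evaluate(b, c, d):
        """Chained-gates model: the multiplier's output is wired straight
        into the adder, so the expression settles in a single step."""
        return b * c + d, 1

    print(cpu_evaluate(2, 3, 4))   # (10, 2)
    print(fpga_evaluate(2, 3, 4))  # (10, 1)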

Also FPGA’s can be reconfigured using memory cells connected to the transistors on the chip. This makes them reconfigurable on the fly (eg. In satellites where new programs are just beamed up). This is where FPGA really surpass microprocessors one is able to reconfigure these FPGA’s thousands of times per second to handle different tasks, in optimum way.


Thus there are two ways engineers can benefit from FPGAs:

  1. They exploit the inherent parallelism in the algorithm being executed and hence have the potential to radically reduce elapsed computation time.

  2. They may be programmed in an analog-computer style that bypasses the need to develop solution algorithms for the equations that govern the problem (see the sketch after this list).
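
As a loose illustration of the second point (my own sketch, not taken from the referenced papers), the analog-computer style means wiring the governing equation directly as chained integrator blocks and letting the system evolve, instead of deriving a solution algorithm. Here the mass-spring equation m*x'' + k*x = 0 is stepped forward in time this way:

    m, k = 1.0, 4.0     # hypothetical mass and stiffness
    x, v = 1.0, 0.0     # initial position and velocity
    dt = 0.001          # integration time step

    for _ in range(5000):          # evolve to t = 5.0
        a = -(k / m) * x           # the governing equation, wired as-is
        v += a * dt                # first "integrator" block
        x += v * dt                # second "integrator" block

    print(x)   # close to the exact solution cos(2 * 5) = -0.84

Each line inside the loop corresponds to one hard-wired block; no closed-form solver is ever derived.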


VIVA:

The easy configurability of the FPGA gives the developer the ability to tailor the chip to specific applications. This, however, is only useful to developers who know how to design and optimize chips, which is a long and difficult process.

So far, engineers have been able to create FPGA-based machines that perform only single tasks. This makes a machine great for one purpose but useless for any other. WinCom Systems, for example, designed a server about the size of a DVD player that can do the work of 50 or more $5,000 mainstream servers yet costs only $25,000; it uses just a couple of FPGAs.

That changes with VIVA. To make the gates available to the user without machine-level programming, VIVA performs optimization and simplifies the programming of algorithms, letting designers easily create applications for FPGA machines by drawing what looks like a wiring diagram. The user has a library of tools that aid in the design of components; these components can become part of the library or serve only the task at hand. Objects from each library can be used to design other components, which can then be used in a hierarchical approach for very complex calculations, as the sketch below suggests.
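
Since VIVA itself is graphical and proprietary, the following is only a rough textual analogue (my own sketch) of its hierarchical idea: primitive components are wired into larger ones, and anything built can be placed back into the library for reuse.

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class Component:
        """A library object: a named behavior, reusable anywhere."""
        name: str
        behavior: Callable

        def __call__(self, *args):
            return self.behavior(*args)

    # Primitive library objects.
    AND = Component("AND", lambda a, b: a & b)
    XOR = Component("XOR", lambda a, b: a ^ b)

    # A larger component wired from the smaller ones: a half adder that
    # returns (sum, carry). It can itself go back into the library.
    half_adder = Component("HALF_ADDER", lambda a, b: (XOR(a, b), AND(a, b)))

    library: Dict[str, Component] = {c.name: c for c in (AND, XOR, half_adder)}

    s, carry = library["HALF_ADDER"](1, 1)
    print(s, carry)   # 0 1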


Analysis:

To use FPGA computers to their full potential, the programmer must take the best possible advantage of the inherently parallel nature of the FPGA; this is also what leads to high solution speed.

Any operation that exhibits a high degree of natural parallelism can reap great benefits from the parallel nature of FPGAs and VIVA. Parallelism on FPGAs is limited by only two factors:


  1. The number of operators that can fit on the available silicon.

  2. The amount of parallelism in the process being programmed. In nearly all algorithms, some processes cannot operate simultaneously because one process depends on another's results; such processes require separate sequential steps no matter how much silicon is available, and programming strategies must take these dependences into account. Necessary sequentialism, however, is more prevalent in some algorithms than in others, so the best candidates for FPGA programming are those with minimal necessary sequentialism. For highly parallel algorithms, most of the silicon may operate in parallel nearly all the time (see the sketch after this list).
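
The following small sketch of my own makes the second limit concrete: summing n numbers always takes n-1 additions, but because each level of the reduction tree depends on the one below it, no amount of parallel hardware gets below ceil(log2(n)) sequential steps.

    import math

    def parallel_sum_steps(values):
        """Pairwise reduction; each while-iteration is one parallel step,
        because all additions inside it are independent of each other."""
        steps = 0
        while len(values) > 1:
            values = [values[i] + values[i + 1] if i + 1 < len(values) else values[i]
                      for i in range(0, len(values), 2)]
            steps += 1
        return values[0], steps

    data = list(range(8))
    total, steps = parallel_sum_steps(data)
    print(total, steps)                      # 28 3  <- three parallel steps
    print(len(data) - 1)                     # 7     <- steps done one at a time
    print(math.ceil(math.log2(len(data))))   # 3     <- the dependence-imposed floor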

To perform static analysis efficiently on FPGAs, applying the highest degree of inherent parallelism, one can use:



  • One-by-one direct solution method: This method is not effective because algorithms implemented with this approach lose scalability, since interprocessor communication time grows relative to computing time.

  • Iterative method: Each equation is assigned to a separate processor and all of them are calculated concurrently (see the sketch after this list). The problem is that if the number of iterations to convergence or the number of load cases grows too large, the whole purpose of parallel processing is defeated because the calculation takes too long.

  • Quasi-dynamic step-by-step method: The finite elements are treated as dynamic systems and solved step by step in the time domain.
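
As a loose illustration of the iterative method (my own sketch, not the solver from the Fithian et al. reference), here is a Jacobi iteration on A x = b. Each equation's update depends only on the previous iterate, so all n updates are independent and could each run on its own FPGA "processor" within the same step.

    import numpy as np

    def jacobi(A, b, iterations=50):
        """One Jacobi sweep updates every unknown from the previous iterate,
        so all updates within a sweep are mutually independent."""
        x = np.zeros(len(b))
        D = np.diag(A)              # diagonal of A
        R = A - np.diagflat(D)      # off-diagonal part of A
        for _ in range(iterations):
            x = (b - R @ x) / D     # all n equations updated concurrently
        return x

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 4.0, 1.0],
                  [0.0, 1.0, 4.0]])   # diagonally dominant, so Jacobi converges
    b = np.array([1.0, 2.0, 3.0])
    print(jacobi(A, b))               # agrees with np.linalg.solve(A, b)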


Conclusion:

The top-of-the-line "hypercomputer" that Star Bridge sells costs about $700,000 and has 22 Xilinx chips that can perform 400 billion floating-point operations per second. If this claim is true, it would place these machines among the 200 fastest supercomputers in the world.

The graphical programming in VIVA is very intuitive and shows serious promise. Computing technology has embarked on a path of inherent parallelism, and if the current rate of FPGA and VIVA advancement continues, it could lead to the end of the von Neumann architecture. FPGA technology is still in the developing phase, but I see FPGA-based systems as the path of tomorrow.
References:

Fithian, W. S., Brown, S., Singleterry, R. C., and Storaasli, O. O. 2002. Iterative Matrix Equation Solver for a Reconfigurable FPGA-Based Hypercomputer.

Singleterry, R. C., Sobieszczanski-Sobieski, J., and Brown, S. 2002. Field-Programmable Gate Array Computer in Structural Analysis: An Initial Exploration.

Lyons, D. 2003. Super-Cheap Supercomputing? Forbes Magazine, March.

Lyons, D. 2003. Chipping Away. Forbes Magazine, April.

Lyons, D. 2003. Flexible Flyers. Forbes Magazine, April.
