Early programming languages were designed for specific kinds of tasks. Modern languages are more general-purpose. In any case, each language has its own characteristics, vocabulary, and syntax. While this page will not by any means cover all of the available programming languages, we will look at a number of the better-known languages.
One of the earliest computer languages, FORTRAN (an acronym for FORmula TRANslator) was designed to handle mathematical operations, originally on mainframe computers. FORTRAN was unable to handle text manipulations of any sort, and could just barely place quoted text in its printed output.
The COmmon Business Oriented Language, or COBOL, is almost the exact opposite of FORTRAN. COBOL was devised to permit programs to be written for business data processing applications, using English-like statements. It is intended to handle business data records, so its mathematical capabilities are limited largely to dollars-and-cents arithmetic and percentages.
Another mathematically-oriented language, ALGOL (ALGOrithmic Language) does its work primarily in terms of numerical procedures called (surprise!) algorithms.
Named after Blaise Pascal, a French philosopher, mathematician, and physicist, Pascal was specifically designed as a teaching language. Its object was to force the student to correctly learn the techniques and requirements of structured programming. Pascal was designed originally to be platform-independent. That is, a Pascal program could be compiled on any computer, and the result would run correctly on any other computer, even with a different and incompatible type of processor. The result was relatively slow operation, but it did work after its own fashion.
The Beginner's All-purpose Symbolic Instruction Code (BASIC) was the first interpreted language made available for general use. It is now in such widespread use that most people see and use this language before they deal with others. It has changed over time, and is now most commonly seen as True BASIC or Visual Basic.
Both a compiler and an interpreter, FORTH was originally developed to handle real-time operations and still allow direct user control and rapid program modifications. The name FORTH stems from its conception as a fourth-generation language, but it was developed on a computer system that permitted only five character filenames.
Assembly language is a symbolic representation of the absolute machine code of a particular processor. Therefore, each processor has its own specific assembly language, although a family of processors, such as the Intel 80x86 series, may share some or all of its assembly code.
C grew out of a line of earlier research languages: BCPL inspired a stripped-down language called B, and B was in turn improved, upgraded, and debugged until it finally became C. The C language has turned out to be quite versatile and amazingly powerful. It is also remarkably simple, and is nevertheless capable of great things.
When the concepts of objects and object-oriented programming were being developed, the standard C language didn't have the built-in structures to handle them. However, C was (and is) still highly useful and well worth keeping around, so an extended language was built on top of it, initially known as "C with Classes." As the concepts of object-oriented programming continued to develop, this extended language matured and was renamed C++, a pun on C's ++ increment operator: C, incremented by one.
The search for a platform-independent language is always in progress. Java is one of the best-known languages designed to meet this goal. Any computer with a Java Runtime Environment can run a Java program.
One of the more useful aspects of Java is that Web browsers were designed to be able to embed small Java applications, or "applets," into Web pages. Other Java programs, called "servlets," run on Web servers. This allows extra communication between the page and the server, and permits a high degree of interactivity and dynamic page generation. The downside of using applets this way is that they inherently run much more slowly than native programs on your computer.
The Practical Extraction and Report Language (Perl) is very similar to C in many respects. However, it has a number of features which make it very useful in a wide range of applications. The most visible use of Perl is in CGI programming for the World Wide Web. Very often when you submit a form to a server, that form will be processed by a program written in Perl.
However, this is not the only use for Perl by any means. Perl is an excellent general-purpose programming language that allows fast program development on a wide range of platforms.
Levels of Programming Languages
There is only one programming language that any computer can actually understand and execute: its own native binary machine code. This is the lowest possible level of language in which it is possible to write a computer program. All other languages are said to be high level or low level according to how closely they resemble machine code.
In this context, a low-level language corresponds closely to machine code, so that a single low-level language instruction translates to a single machine-language instruction. A high-level language instruction typically translates into a series of machine-language instructions.
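The one-to-many translation can be sketched with a toy example. The statement, variable names, and the single-accumulator "machine" below are all hypothetical, chosen only to show how one high-level line decomposes into several machine-level steps:

```python
# One hypothetical high-level statement:
price, tax = 100, 8
total = price + tax

# The same work expressed as machine-level steps, modeled here as an
# explicit load/add/store sequence on a single "accumulator" register:
memory = {"price": 100, "tax": 8, "total": 0}
accumulator = memory["price"]      # LOAD  price
accumulator += memory["tax"]       # ADD   tax
memory["total"] = accumulator      # STORE total

assert memory["total"] == total    # both routes give the same result
```

One line of source became three machine operations; real compilers routinely produce longer sequences than this.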
Low-level languages have the advantage that they can be written to take advantage of any peculiarities in the architecture of the central processing unit (CPU) which is the "brain" of any computer. Thus, a program written in a low-level language can be extremely efficient, making optimum use of both computer memory and processing time. However, to write a low-level program takes a substantial amount of time, as well as a clear understanding of the inner workings of the processor itself. Therefore, low-level programming is typically used only for very small programs, or for segments of code that are highly critical and must run as efficiently as possible.
High-level languages permit faster development of large programs. The final program as executed by the computer is not as efficient, but the savings in programmer time generally far outweigh the inefficiencies of the finished product. This is because the cost of writing a program is nearly constant for each line of code, regardless of the language. Thus, a high-level language where each line of code translates to 10 machine instructions costs only one tenth as much in program development as a low-level language where each line of code represents only a single machine instruction.
In addition to the distinction between high-level and low-level languages, there is a further distinction between compiler languages and interpreter languages. Let us look at the various levels.
Absolute Machine Code
The very lowest possible level at which you can program a computer is in its own native machine code, consisting of strings of 1's and 0's and stored as binary numbers. The main problems with using machine code directly are that it is very easy to make a mistake, and very hard to find it once you realize the mistake has been made.
Assembly language is nothing more than a symbolic representation of machine code, which also allows symbolic designation of memory locations. Thus, an instruction to add the contents of a memory location to an internal CPU register called the accumulator might be written as ADD TOTAL instead of as a string of binary digits (bits).
No matter how close assembly language is to machine code, the computer still cannot understand it. The assembly-language program must be translated into machine code by a separate program called an assembler. The assembler program recognizes the character strings that make up the symbolic names of the various machine operations, and substitutes the required machine code for each instruction. At the same time, it also calculates the required address in memory for each symbolic name of a memory location, and substitutes those addresses for the names. The final result is a machine-language program that can run on its own at any time; the assembler and the assembly-language program are no longer needed. To help distinguish between the "before" and "after" versions of the program, the original assembly-language program is also known as the source code, while the final machine-language program is designated the object code.
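The two jobs described above, substituting opcodes for mnemonics and addresses for symbolic names, can be sketched as a toy two-pass assembler. The mnemonics, opcode numbers, and the instruction format here are invented for illustration and do not correspond to any real processor:

```python
# Hypothetical opcode table for a made-up machine.
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3}

def assemble(source):
    """Translate (mnemonic, symbol) pairs into numeric object code."""
    # Pass 1: assign a memory address to each symbolic name.
    addresses = {}
    for mnemonic, operand in source:
        if operand not in addresses:
            addresses[operand] = len(addresses)
    # Pass 2: substitute opcodes and addresses -- the "object code".
    return [(OPCODES[mnemonic], addresses[operand])
            for mnemonic, operand in source]

program = [("LOAD", "price"), ("ADD", "tax"), ("STORE", "total")]
object_code = assemble(program)
# object_code is now pure numbers: [(1, 0), (2, 1), (3, 2)]
```

Once `object_code` exists, the source list and the `assemble` function are no longer needed, which mirrors the source code / object code distinction above.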
If an assembly-language program needs to be changed or corrected, it is necessary to make the changes to the source code and then re-assemble it to create a new object program.
Compiler languages are the high-level equivalent of assembly language. Each instruction in the compiler language can correspond to many machine instructions. Once the program has been written, it is translated to the equivalent machine code by a program called a compiler. Once the program has been compiled, the resulting machine code is saved separately, and can be run on its own at any time.
As with assembly-language programs, updating or correcting a compiled program requires that the original (source) program be modified appropriately and then recompiled to form a new machine-language (object) program.
Typically, the compiled machine code is less efficient than the code produced using assembly language: it runs a bit more slowly and uses a bit more memory than the equivalent assembled program. Offsetting this drawback, a compiler-language program takes much less time to develop, so it can be ready to go sooner than the assembly-language program.
An interpreter language, like a compiler language, is considered to be high level. However, it operates in a totally different manner: instead of the program being translated to machine code ahead of time, the interpreter program resides in memory and directly executes the high-level program without preliminary translation.
This use of an interpreter program to directly execute the user's program has both advantages and disadvantages. The primary advantage is that you can run the program to test its operation, make a few changes, and run it again directly. There is no need to recompile because no new machine code is ever produced. This can enormously speed up the development and testing process.
On the down side, this arrangement requires that both the interpreter and the user's program reside in memory at the same time. In addition, because the interpreter has to scan the user's program one line at a time and execute internal portions of itself in response, execution of an interpreted program is much slower than for a compiled program.
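The scan-a-line, execute-a-response cycle described above can be sketched as a minimal interpreter for a made-up two-word language (the ADD/SUB/PRINT syntax is invented for illustration). Note that the interpreter must re-scan every line each time it reaches it, which is exactly where the speed penalty comes from:

```python
def interpret(program_lines):
    """Execute a toy language one source line at a time."""
    accumulator = 0
    for line in program_lines:
        parts = line.split()              # scan the source line...
        op = parts[0]
        if op == "ADD":                   # ...then execute the matching
            accumulator += int(parts[1])  #    internal portion of the
        elif op == "SUB":                 #    interpreter itself
            accumulator -= int(parts[1])
        elif op == "PRINT":
            print(accumulator)
    return accumulator

interpret(["ADD 10", "ADD 5", "SUB 3", "PRINT"])   # prints 12
```

No machine code is ever produced: change a source line and simply run `interpret` again, which is the fast edit-test cycle described above.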