Unit-1

Introduction:

  • Assembly languages were originally designed with a one-to-one correspondence between mnemonics and machine language instructions.

  • Translating from mnemonics to machine language became the job of a system program known as an assembler.

  • The mid-1950s saw the development of the original dialect of Fortran, the first high-level programming language. Other early high-level languages include Lisp and Algol.

  • Translating from a high-level language to assembly or machine language is the job of the system program known as a compiler.

The Art of Language Design:

Today there are thousands of high-level programming languages, and new ones continue to emerge every year. Human beings now use assembly language only for special-purpose applications.

"Why are there so many programming languages?" There are several possible answers:

1. Evolution.

2. Special purposes.

3. Personal preference.
Evolution:

  • The late 1960s and early 1970s saw a revolution in "structured programming," in which the goto-based control flow of languages like Fortran, Cobol, and Basic gave way to while loops and case statements.

  • In the late 1980s, the nested block structure of languages like Algol, Pascal, and Ada began to give way to the object-oriented structure of Smalltalk, C++, Eiffel, and so on.

Special Purposes:

Many languages were designed for a specific problem domain.



  • The various Lisp dialects are good for manipulating symbolic data and complex data structures.

  • Snobol and Icon are good for manipulating character strings.

  • C is good for low-level systems programming.

  • Prolog is good for reasoning about logical relationships among data.



Personal Preference:

Different people like different things.



  • Some people love the terseness of C; some hate it.

  • Some people find it natural to think recursively; others prefer iteration.

  • Some people like to work with pointers; others prefer the implicit dereferencing of Lisp, Clu, Java, and ML.

Some languages are more successful than others. Many languages have been designed, but only some are widely used.

"What makes a language successful?" Again there are several answers:



1. Expressive Power.

2. Ease of Use for the Novice.

3. Ease of Implementation.

4. Open Source.

5. Excellent Compilers.
Expressive Power:

  • One language may be more "powerful" than another in the sense that it makes things easier to express and more fluent to use.

  • Language features clearly have a huge impact on the programmer's ability to write clear, concise, and maintainable code, especially for very large systems.

  • Languages often cited for their expressive power include C, Common Lisp, APL, Algol-68, and Perl.

Ease of Use for the Novice:

  • Such a language is easy to learn. Languages in this category include Basic, Pascal, Logo, and Scheme.


Ease of Implementation:

  • Such a language is easy to implement. Examples include Basic and Forth.

  • Basic succeeded in part because it could be implemented easily on tiny machines with limited resources.

  • Forth has a small but dedicated following for similar reasons.

Open Source:

  • Most programming languages today have at least one open source compiler or interpreter.

  • But some languages, C in particular, are much more closely associated than others with freely distributed, peer-reviewed, community-supported computing.

Excellent Compilers:

  • Fortran, for example, can be compiled to very good (fast/small) code.

  • Other languages have succeeded because of wide dissemination at minimal cost; examples include Pascal, Turing, and Java.


The Programming Language Spectrum (or Classification of Programming Languages):

The many existing languages can be classified into families based on their model of computation.

There are two broad families:

  1. Declarative languages focus on what the computer is to do.

  2. Imperative languages focus on how the computer should do it.

Declarative languages are in some sense “higher level”; they are more in tune with the programmer’s point of view, and less with the implementor’s point of view.

Imperative languages predominate, mainly for performance reasons. Figure 1 below shows a common set of families.

declarative
    functional: Lisp/Scheme, ML, Haskell
    dataflow: Id, Val
    logic, constraint-based: Prolog, spreadsheets
    template-based: XSLT

imperative
    von Neumann: C, Ada, Fortran, . . .
    scripting: Perl, Python, PHP, . . .
    object-oriented: Smalltalk, Eiffel, C++, Java, . . .


Within the declarative and imperative families, there are several important subclasses.
Declarative:

(a) Functional languages employ a computational model based on the recursive definition of functions. They take their inspiration from the lambda calculus. Languages in this category include Lisp, ML, and Haskell.

(b) Dataflow languages model computation as the flow of information (tokens) among primitive functional nodes. Id and Val are examples of dataflow languages.


(c) Logic or constraint-based languages take their inspiration from predicate logic. They model computation as an attempt to find values that satisfy certain specified relationships.

Prolog is the best-known logic language. The term can also be applied to the programmable aspects of spreadsheet systems such as Excel, VisiCalc, or Lotus 1-2-3.


Imperative:

  (a) Von Neumann languages are the most familiar and successful. They include Fortran, Ada 83, C, and all of the others in which the basic means of computation is the modification of variables.

(b) Scripting languages are a subset of the von Neumann languages. Several scripting languages were originally developed for specific purposes: csh and bash, for example, are the input languages of job control (shell) programs; Awk was intended for text manipulation; PHP and JavaScript are primarily intended for the generation of web pages with dynamic content (with execution on the server and the client, respectively). Other languages, including Perl, Python, Ruby, and Tcl, are more deliberately general purpose.

(c) Object-oriented languages are more closely related to the von Neumann languages but have a much more structured and distributed model of both memory and computation.

Smalltalk is the purest of the object-oriented languages; C++ and Java are the most widely used.


Why Study Programming Languages?
Programming languages are central to computer science and to the typical computer science curriculum.

For one thing, a good understanding of language design and implementation can help one choose the most appropriate language for any given task.


Reasons for studying programming languages:

i) Improved background for choosing the appropriate language.

ii) Increased ability to learn new languages.

iii) Better understanding of the significance of implementation.

iv) Better understanding of obscure language features.

v) Better understanding of how to do things in languages that don't support them explicitly.




i) Improved background for choosing the appropriate language:

Many languages address the same problem domain; studying them provides the background to compare the alternatives and choose the better language for a given task.

  • Systems programming: C vs. Modula-3 vs. C++.

  • Numerical computations: Fortran vs. APL vs. Ada.

  • Embedded systems: Ada vs. Modula-2.

  • Symbolic data manipulation: Common Lisp vs. Scheme vs. ML.

  • Networked PC programs: Java vs. C/CORBA.



ii) Increased ability to learn new languages:

Studying programming languages makes it easier to learn new ones, because many languages are similar.

Core concepts appear in most programming languages; if you learn to think in terms of iteration, recursion, and abstraction, you will find it easier to assimilate the syntax and semantic details of a new language.

Think of the analogy to human languages: a good grasp of grammar makes it easier to pick up new languages.

iii) Better understanding of the significance of implementation:

Studying programming languages helps one understand implementation costs. For example:

i) Use simple arithmetic equivalents (use X*X instead of X**2), as in the sketch below.

ii) Use C pointers or Pascal's with statement to factor out common address calculations.

iii) Avoid call-by-value with large data items in Pascal.

iv) Avoid the use of call-by-name in Algol 60.
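
To illustrate point i), here is a minimal C sketch (the variable names are our own) contrasting the two ways of squaring a value. On many implementations x*x compiles to a single multiplication, while pow() is a general-purpose library routine and may be considerably more expensive:

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 3.0;

    double a = x * x;        /* cheap: one multiply instruction */
    double b = pow(x, 2.0);  /* potentially a general library call */

    printf("%f %f\n", a, b);
    return 0;
}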



iv) Better understanding of obscure language features:

i) In C, it helps you understand unions, arrays and pointers, separate compilation, varargs, and catch and throw.

ii) In Common Lisp, it helps you understand first-class functions/closures, streams, catch and throw, and symbol internals.

v) Better understanding of how to do things in languages that don't support them explicitly:

i) Lack of suitable control structures in Fortran.

ii) Lack of recursion in (early) Fortran.

iii) Lack of named constants and enumerations in Fortran.

iv) Lack of iterators.

v) Lack of modules in C and Pascal: use comments and programmer discipline (see the sketch after this list).
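
As a rough illustration of point v), here is a minimal C sketch (the file names and identifiers are invented for the example) of module-like discipline: a header advertises a small interface, while file-scope static declarations keep the internals private:

/* stack.h -- the "module interface" */
void stack_push(int value);
int stack_pop(void);

/* stack.c -- the "module body" */
#include "stack.h"

static int data[100];   /* 'static' hides these from other files */
static int top = 0;

void stack_push(int value) { data[top++] = value; }
int stack_pop(void) { return data[--top]; }

Comments and programmer discipline must supply what the language does not: nothing prevents a careless programmer from bypassing the interface.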



Compilation and Interpretation:
Compilation:

The compiler translates the high-level source program into an equivalent target program (typically in machine language) and then goes away.






Interpretation:

An alternative style of implementation for high-level languages is known as interpretation.





  • Interpreter stays around for the execution of the application.

  • In fact, the interpreter is the locus of control during that execution.

  • Interpretation leads to greater flexibility and better diagnostics (error messages) than does compilation.

  • Because the source code is being executed directly, the interpreter can include an excellent source-level debugger.

  • Delaying decisions about program implementation until run time is known as late binding.


Compilation vs. interpretation:

  1. Interpretation provides greater flexibility and better diagnostics than compilation.

  2. Compilation provides better performance than interpretation.

  3. Most language implementations use a mixture of both compilation and interpretation, as shown in the figure below:





  4. We say that a language is interpreted when the initial translator is simple.

  5. If the translator is complicated, we say that the language is compiled.

  6. "Simple" and "complicated" are subjective terms; the distinction is blurred because it is possible for a compiler to produce code that is then executed by a complicated virtual machine (interpreter).


Different implementation strategies:
Preprocessor: Most interpreted languages employ an initial translator (a preprocessor) that performs the following:

  • Removes comments and white space, and groups characters together into tokens, such as keywords, identifiers, numbers, and symbols.

  • The translator may also expand abbreviations in the style of a macro assembler.

  • Finally, it may identify higher-level syntactic structures, such as loops and subroutines.

  • The goal is to produce an intermediate form that mirrors the structure of the source but can be interpreted more efficiently.

  • Early implementations of Basic were pure interpreters: they had no initial translator, and would reread (and ignore) the comments every time they were encountered during execution. Removing comments from a program could therefore improve its performance.

Linker:

  • The typical Fortran implementation comes close to pure compilation. The compiler translates Fortran source into machine language.

  • However, it counts on the existence of a library of subroutines that are not part of the original program. Examples include mathematical functions (sin, cos, log, etc.) and I/O.

  • The compiler relies on a separate program, known as a linker, to merge the appropriate library routines into the final program (see the sketch below).
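
For illustration, here is a minimal C sketch (the file name demo.c is our own) whose call to sin is resolved by the linker against the math library:

/* demo.c */
#include <math.h>    /* declares sin(); the code itself lives in the math library */
#include <stdio.h>

int main(void) {
    /* The compiler emits a call to sin; the linker later merges in
       the library routine that implements it. */
    printf("sin(1.0) = %f\n", sin(1.0));
    return 0;
}

On many Unix systems this is compiled and then linked with commands such as cc -c demo.c followed by cc demo.o -lm.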





Post-compilation assembly:

  • Many compilers generate assembly language instead of machine language.

  • This convention facilitates debugging, since assembly language is easier for people to read, and isolates the compiler from changes in the format of machine language files.




C-preprocessor:

  • Compilers for C begin with a preprocessor that removes comments and expands macros.

  • This allows several versions of a program to be built from the same source (see the sketch below).
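
For example, here is a minimal sketch (the macro name VERBOSE is invented for the example) of how conditional compilation yields two versions of a program from one source file:

#include <stdio.h>

/* Defining VERBOSE at compile time (e.g., with cc -DVERBOSE)
   builds the chatty version; otherwise LOG expands to nothing. */
#ifdef VERBOSE
#define LOG(msg) fprintf(stderr, "log: %s\n", msg)
#else
#define LOG(msg)
#endif

int main(void) {
    LOG("starting up");
    printf("hello\n");
    return 0;
}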



Source- to – source translation (C++):

  • C++ implementations based on the early AT&T compiler generated an intermediate program in C instead of assembly language.




Bootstrapping:

A compiler written in its own language can be "run through itself" in a process known as bootstrapping.

Many early Pascal compilers were built around a set of tools distributed by Niklaus Wirth. These included the following:

– A Pascal compiler, written in Pascal, that would generate output in P-code, a simple stack-based language.

– The same compiler, already translated into P-code.

– A P-code interpreter, written in Pascal.



Dynamic and just-in-time compilation:

  • In some cases a programming system may deliberately delay compilation until the last possible moment.

  • One example occurs in implementations of Lisp or Prolog that invoke the compiler on the fly, to translate newly created source into machine language, or to optimize the code for a particular input set.

  • Another example occurs in implementations of Java. The Java language definition defines a machine-independent intermediate form known as byte code.

  • Byte code is the standard format for distribution of Java programs; it allows programs to be transferred easily over the Internet and then run on any platform.

  • The first Java implementations were based on byte-code interpreters, but more recent (faster) implementations employ a just-in-time compiler that translates byte code into machine language immediately before each execution of the program.

Microcode:

  • The assembly-level instruction set is not actually implemented in hardware but in fact runs on an interpreter.

  • The interpreter is written in low-level instructions called microcode (or firmware), which is stored in read-only memory and executed by the hardware.



Programming environments:

  • Compilers and interpreters do not exist in isolation. Programmers are assisted in their work by a host of other tools.

  • Assemblers, debuggers, preprocessors, and linkers were mentioned earlier.

  • Editors are familiar to every programmer. They may be assisted by cross-referencing facilities that allow the programmer to find the point at which an object is defined, given a point at which it is used.

  • Configuration management tools help keep track of dependences among the (many versions of) separately compiled modules in a large software system.

  • Perusal tools exist not only for text but also for intermediate languages that may be stored in binary.

  • Profilers and other performance analysis tools often work in conjunction with debuggers to help identify the pieces of a program that consume the bulk of its computation time.

  • In older programming environments, tools may be executed individually, at the explicit request of the user. If a running program terminates abnormally with a "bus error" (invalid address) message, for example, the user may choose to invoke a debugger to examine the "core" file dumped by the operating system.

  • He or she may then attempt to identify the program bug by setting breakpoints, enabling tracing, and so on, and running the program again under the control of the debugger.

  • More recent programming environments provide much more integrated tools.

  • When an invalid address error occurs in an integrated environment, a new window is likely to appear on the user’s screen, with the line of source code at which the error occurred highlighted.

  • Breakpoints and tracing can then be set in this window without explicitly invoking a debugger.

  • Changes to the source can be made without explicitly invoking an editor.

  • The editor may also incorporate knowledge of the language syntax, providing templates for all the standard control structures, and checking syntax as it is typed in.

  • In recent years, integrated environments have largely displaced command-line tools for many languages and systems.

  • Popular open source IDEs include Eclipse and NetBeans.

  • Commercial systems include the Visual Studio environment from Microsoft and the Xcode environment from Apple.

  • Much of the appearance of integration can also be achieved within sophisticated editors such as Emacs.


An overview of compilation:


Fig: phases of compilation

1. Scanner.

2. Parser.

3. Semantic analysis.

4. Intermediate code generation.

5. Code generation.

6. Code optimization.


  • The first few phases (up to semantic analysis) serve to figure out the meaning of the source program.

  • They are sometimes called the front end of the compiler.

  • The last few phases serve to construct an equivalent target program.

  • They are sometimes called the back end of the compiler.

  • Many compiler phases can be created automatically from a formal description of the source and /or target languages.


Lexical analysis:

Scanning is also known as lexical analysis. The principal purpose of the scanner is to simplify the task of the parser by reducing the size of the input (there are many more characters than tokens) and by removing extraneous characters like white space.


e.g.: program gcd(input, output);

var i, j : integer;

begin

read(i, j);



while i <> j do

if i > j then i := i - j

else j := j - i;

writeln(i)

end.
The scanner reads characters (‘p’, ‘r’, ‘o’, ‘g’, ‘r’, ‘a’, ‘m’, ‘ ’, ‘g’, ‘c’, ‘d’, etc.) and groups them into tokens, which are the smallest meaningful units of the program. In our example, the tokens are:

program  gcd  (  input  ,  output  )  ;  var  i  ,  j  :  integer  ;  begin  read  (  i  ,  j  )  ;  while  i  <>  j  do  if  i  >  j  then  i  :=  i  -  j  else  j  :=  j  -  i  ;  writeln  (  i  )  end  .
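
To make the idea concrete, here is a minimal C sketch (not any real compiler's scanner) that groups characters into identifier, number, and symbol tokens while skipping white space:

#include <ctype.h>
#include <stdio.h>

/* Print one token per line: identifiers/keywords, numbers,
   and one- or two-character symbols such as := and <>. */
static void scan(const char *p) {
    while (*p != '\0') {
        if (isspace((unsigned char)*p)) {           /* skip white space */
            p++;
        } else if (isalpha((unsigned char)*p)) {    /* identifier or keyword */
            const char *start = p;
            while (isalnum((unsigned char)*p)) p++;
            printf("ident:  %.*s\n", (int)(p - start), start);
        } else if (isdigit((unsigned char)*p)) {    /* integer literal */
            const char *start = p;
            while (isdigit((unsigned char)*p)) p++;
            printf("number: %.*s\n", (int)(p - start), start);
        } else if ((p[0] == ':' && p[1] == '=') ||
                   (p[0] == '<' && p[1] == '>')) {  /* two-character symbols */
            printf("symbol: %.2s\n", p);
            p += 2;
        } else {                                    /* single-character symbol */
            printf("symbol: %c\n", *p);
            p++;
        }
    }
}

int main(void) {
    scan("while i <> j do i := i - j;");
    return 0;
}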



Syntax analysis:

A context-free grammar is said to define the syntax of the language; parsing is therefore known as syntactic analysis.
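
For illustration, a context-free grammar consists of rules (productions) describing how tokens may be combined. A simplified expression fragment (not the full Pascal grammar) might read:

expr    →  term  |  expr add_op term
term    →  factor  |  term mult_op factor
factor  →  ( expr )  |  identifier  |  number
add_op  →  +  |  -
mult_op →  *  |  /

The parser uses such rules to discover the hierarchical structure of expressions like i - j.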


Semantic analysis:

  • Semantic analysis is the discovery of meaning in a program.

  • The semantic analysis phase of compilation recognizes when multiple occurrences of the same identifier are meant to refer to the same program entity, and ensures that the uses are consistent.

  • The semantic analyzer typically builds and maintains a symbol table data structure that maps each identifier to the information known about it (see the sketch below).
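
As a rough sketch (not any particular compiler's data structure), a symbol table can be as simple as a list mapping each identifier to what is known about it; here, declare records a declaration and lookup detects undeclared names:

#include <stdio.h>
#include <string.h>

/* One entry per declared identifier: its name and its type. */
struct symbol {
    const char *name;
    const char *type;
    struct symbol *next;
};

static struct symbol *table = NULL;

/* Record a declaration (no duplicate checking in this sketch). */
static void declare(struct symbol *sym, const char *name, const char *type) {
    sym->name = name;
    sym->type = type;
    sym->next = table;
    table = sym;
}

/* Look up an identifier; NULL means "use of an undeclared name". */
static struct symbol *lookup(const char *name) {
    for (struct symbol *s = table; s != NULL; s = s->next)
        if (strcmp(s->name, name) == 0)
            return s;
    return NULL;
}

int main(void) {
    struct symbol i, j;
    declare(&i, "i", "integer");   /* from "var i, j : integer;" */
    declare(&j, "j", "integer");
    printf("i has type %s\n", lookup("i")->type);
    printf("k is %s\n", lookup("k") ? "declared" : "undeclared");
    return 0;
}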


Target code generation:

  • The code generation phase of a compiler translates the intermediate form into the target language.

  • To generate assembly or machine language, the code generator traverses the symbol table to assign locations to variables, and then traverses the intermediate representation of the program.


Code improvement:

Code improvement is often referred to as optimization.
