Languages such as C and assembly language have simple macro systems, implemented as preprocessors to the compiler or assembler. C preprocessor macros work by simple textual search-and-replace at the token, rather than the character, level (a short sketch follows the list below). A classic use of macros is in the computer typesetting system TeX and its derivatives, where most of the functionality is based on macros. MacroML is an experimental system that seeks to reconcile static typing and macro systems. Nemerle has typed syntax macros, and one productive way to think of these syntax macros is as a multi-stage computation. Other examples:
- m4, a sophisticated, stand-alone macro processor
- TRAC
- PHP
- Macro Extension TAL, accompanying the Template Attribute Language
- SMX, for web pages
- ML/1 (Macro Language One)
- The General Purpose Macroprocessor, a contextual pattern-matching macro processor that could be described as a combination of regular expressions, EBNF, and AWK
- SAM76
- minimac, a concatenative macro processor
- troff and nroff, for typesetting and formatting Unix manpages
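As a minimal sketch of the token-level substitution described for the C preprocessor above, consider the following fragment; the macro names here are illustrative only, not taken from any particular codebase.

    #include <stdio.h>

    /* Object-like macro: every occurrence of the token BUFFER_SIZE
       is replaced with the token 1024 before compilation proper. */
    #define BUFFER_SIZE 1024

    /* Function-like macro: expanded textually, so the argument is
       parenthesized to avoid operator-precedence surprises. */
    #define SQUARE(x) ((x) * (x))

    int main(void) {
        int n = 5;
        /* Expands to: printf("%d %d\n", 1024, ((n + 1) * (n + 1))); */
        printf("%d %d\n", BUFFER_SIZE, SQUARE(n + 1));
        return 0;
    }

Because the replacement is purely textual, defining SQUARE(x) as x * x would make SQUARE(n + 1) expand to n + 1 * n + 1; the extra parentheses guard against such precedence surprises.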
See also: Assembly language#Macros and Algorithm
Procedural macros
Macros in PL/I are written in a subset of PL/I itself: the compiler executes "preprocessor statements" at compilation time, and the output of this execution forms part of the code that is compiled. The ability to use a familiar procedural language as the macro language gives much greater power than that of text-substitution macros, at the expense of a larger and slower compiler.
Frame Technology's frame macros have their own command syntax but can also contain text in any language. Each frame is both a generic component in a hierarchy of nested subassemblies, and a procedure for integrating itself with its subassembly frames (a recursive process that resolves integration conflicts in favor of higher level subassemblies). The outputs are custom documents, typically compilable source modules. Frame Technology can avoid the proliferation of similar but subtly different components, an issue that has plagued software development since the invention of macros and subroutines.
Most assembly languages have less powerful procedural macro facilities, for example allowing a block of code to be repeated N times for loop unrolling; but these have a completely different syntax from the actual assembly language.
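Assembler syntaxes differ too much for a single portable example, but the same repeat-a-block idea can be sketched with the C preprocessor; REPEAT4 below is a hypothetical helper written for this illustration, not a standard facility.

    /* Hypothetical repetition macro: expands its argument four times,
       imitating an assembler's REPT-style block repetition used for
       manual loop unrolling. */
    #define REPEAT4(stmt) stmt stmt stmt stmt

    /* Fills the first four slots of dst with no loop overhead;
       REPEAT4 expands to four copies of the assignment statement. */
    void fill4(int *dst, int value) {
        int i = 0;
        REPEAT4(dst[i++] = value;)
    }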
Lisp macros
Lisp's uniform, parenthesized syntax works especially well with macros. Languages of the Lisp family, such as Common Lisp and Scheme, have powerful macro systems because the syntax is simple enough to be parsed easily. Lisp macros transform the program structure itself, with the full language available to express such transformations. Common Lisp and Scheme differ in their macro systems: Scheme's is based on pattern matching, while Common Lisp macros are functions that explicitly construct sections of the program.
Being able to choose the order of evaluation (see lazy evaluation and non-strict functions) enables the creation of new syntactic constructs (e.g. control structures) indistinguishable from those built into the language. For instance, in a Lisp dialect that has cond but lacks if, it is possible to define the latter in terms of the former using macros.
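As a minimal sketch in Scheme, assuming a syntax-rules macro system; the macro is called my-if here only to avoid shadowing the built-in if, which real Scheme of course already provides.

    ;; Rewrite (my-if test then-expr else-expr) into a cond form
    ;; before evaluation; `else' is cond's catch-all keyword.
    (define-syntax my-if
      (syntax-rules ()
        ((_ test then-expr else-expr)
         (cond (test then-expr)
               (else else-expr)))))

    ;; Usage: (my-if (> 3 2) 'yes 'no)  =>  yes

Because the expansion happens before evaluation, only one of the two branches is ever evaluated, exactly as with a built-in if; an ordinary function could not guarantee that.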
Macros also make it possible to define data languages that are immediately compiled into code, which means that constructs such as state machines can be implemented in a way that is both natural and efficient.[6]
Macros for machine-independent software
Macros are normally used to map a short string (macro invocation) to a longer sequence of instructions. Another, less common, use of macros is to do the reverse: to map a sequence of instructions to a macro string. This was the approach taken by the STAGE2 Mobile Programming System, which used a rudimentary macro compiler (called SIMCMP) to map the specific instruction set of a given computer to counterpart machine-independent macros. Applications (notably compilers) written in these machine-independent macros can then be run without change on any computer equipped with the rudimentary macro compiler. The first application run in such a context is a more sophisticated and powerful macro compiler, written in the machine-independent macro language. This macro compiler is applied to itself, in a bootstrap fashion, to produce a compiled and much more efficient version of itself. The advantage of this approach is that complex applications can be ported from one computer to a very different computer with very little effort (for each target machine architecture, just the writing of the rudimentary macro compiler).[7][8] The advent of modern programming languages, notably C, for which compilers are available on virtually all computers, has rendered such an approach superfluous. This was, however, one of the first instances (if not the first) of compiler bootstrapping.
Magnetic Ink Character Recognition (MICR)
Magnetic Ink Character Recognition, or MICR, is a character recognition technology used primarily by the banking industry to facilitate the processing of cheques. The technology allows computers to read information (such as account numbers) from printed documents. Unlike barcodes and similar technologies, however, MICR codes can be easily read by humans.
MICR characters are printed in special typefaces with a magnetic ink or toner, usually containing iron oxide. As a machine decodes the MICR text, it first magnetizes the characters in the plane of the paper. The characters are then passed over a MICR read head, a device similar to the playback head of a tape recorder. As each character passes over the head, it produces a unique waveform that can be easily identified by the system.
The use of magnetic printing allows the characters to be read reliably even if they have been overprinted or obscured by other marks, such as cancellation stamps. The error rate for the magnetic scanning of a typical cheque is smaller than with optical character recognition systems. For well-printed MICR documents, the "can't read" rate is usually less than 1%, while the substitution rate (misread rate) is on the order of 1 per 100,000 characters.
Management Information System (MIS)
A management information system (MIS) is a subset of the overall internal controls of a business, covering the application of people, documents, technologies, and procedures by management accountants to solve business problems such as costing a product, a service, or a business-wide strategy. Management information systems are distinct from regular information systems in that they are used to analyze other information systems applied in the operational activities of the organization.[1] Academically, the term is commonly used to refer to the group of information management methods tied to the automation or support of human decision making, e.g. decision support systems, expert systems, and executive information systems.[1]
The field has been described as follows: "MIS 'lives' in the space that intersects technology and business. MIS combines tech with business to get people the information they need to do their jobs better/faster/smarter. Information is the lifeblood of all organizations - now more than ever. MIS professionals work as systems analysts, project managers, systems administrators, etc., communicating directly with staff and management across the organization."[2]
Mapping
Data mapping is the process of creating data element mappings between two distinct data models. Data mapping is used as a first step for a wide variety of data integration tasks, including:
- Data transformation or data mediation between a data source and a destination
- Identification of data relationships as part of data lineage analysis
- Discovery of hidden sensitive data, such as the last four digits of a social security number embedded in another user ID, as part of a data masking or de-identification project (see the sketch after this list)
- Consolidation of multiple databases into a single database, including the identification of redundant columns of data for consolidation or elimination
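As a toy illustration of the discovery task above, the following C sketch scans user-ID strings for a four-digit run that might be a hidden social-security-number fragment. The sample IDs and the pattern are assumptions for illustration; a real de-identification project would use far more careful heuristics.

    #include <regex.h>
    #include <stdio.h>

    int main(void) {
        /* Hypothetical user IDs; some embed a 4-digit run that could be
           the last four digits of a social security number. */
        const char *user_ids[] = { "jdoe", "asmith6789", "mlee_0142" };
        regex_t re;

        /* Flag any four consecutive digits as a masking candidate. */
        if (regcomp(&re, "[0-9]{4}", REG_EXTENDED | REG_NOSUB) != 0)
            return 1;
        for (int i = 0; i < 3; i++) {
            if (regexec(&re, user_ids[i], 0, NULL, 0) == 0)
                printf("candidate for masking: %s\n", user_ids[i]);
        }
        regfree(&re);
        return 0;
    }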
For example, a company that would like to transmit and receive purchase orders and invoices with other companies might use data mapping to create data maps from its own data to standardized ANSI ASC X12 messages for items such as purchase orders and invoices.
X12 standards are generic Electronic Data Interchange (EDI) standards designed to allow a company to exchange data with any other company, regardless of industry. The standards are maintained by the Accredited Standards Committee X12 (ASC X12), with the American National Standards Institute (ANSI) accredited to set standards for EDI. The X12 standards are often called ANSI ASC X12 standards.
In the future, tools based on semantic web languages such as the Resource Description Framework (RDF), the Web Ontology Language (OWL), and standardized metadata registries may make data mapping a more automatic process. This process would be accelerated if each application published its metadata. Fully automated data mapping is a very difficult problem (see Semantic translation).
Hand-coded and graphical manual mapping
Data mappings can be done in a variety of ways: using procedural code, creating XSLT transforms, or using graphical mapping tools that automatically generate executable transformation programs. Graphical tools allow a user to "draw" lines from fields in one set of data to fields in another. Some graphical data mapping tools can "auto-connect" a source and a destination; this feature depends on the source and destination data elements having the same name. Transformation programs are generated automatically in SQL, XSLT, the Java programming language, or C++. These kinds of graphical tools are found in most ETL (Extract, Transform, Load) tools as the primary means of entering data maps to support data movement.
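As a minimal sketch of the hand-coded approach, the following C fragment maps the fields of a hypothetical source record onto a differently shaped destination record; both struct layouts are invented for illustration.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical source data model. */
    struct src_order {
        char buyer[32];
        char part_no[16];
        int  qty;
    };

    /* Hypothetical destination data model with different names. */
    struct dst_invoice_line {
        char customer_name[32];
        char item_code[16];
        int  quantity;
    };

    /* One hand-coded data map: each destination field is populated
       from its corresponding source field. */
    static void map_order_to_invoice(const struct src_order *in,
                                     struct dst_invoice_line *out) {
        strncpy(out->customer_name, in->buyer, sizeof out->customer_name - 1);
        out->customer_name[sizeof out->customer_name - 1] = '\0';
        strncpy(out->item_code, in->part_no, sizeof out->item_code - 1);
        out->item_code[sizeof out->item_code - 1] = '\0';
        out->quantity = in->qty;
    }

    int main(void) {
        struct src_order o = { "Acme Corp", "PN-1001", 3 };
        struct dst_invoice_line line;
        map_order_to_invoice(&o, &line);
        printf("%s %s %d\n", line.customer_name, line.item_code, line.quantity);
        return 0;
    }

A graphical mapping tool automates exactly this kind of field-to-field wiring, emitting the equivalent transformation program in SQL, XSLT, Java, or C++.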