Compiler
Latest revision as of 11:00, 31 July 2024
A compiler is a program that translates a human-readable, plain text computer program, called source code, into less human-readable machine code. In computer science terms, a compiler is a translation system for a computer that can automatically translate from one formal language to another (i.e., languages which are fully specified so that there can be no ambiguity, as opposed to the natural languages spoken by people). A formal language specification, together with a compiler that creates machine code for a computer, constitutes a programming language.
The very first electronic computers had to be programmed without the benefit of a compiler. Both the idea of a compiled language and the first implementation of a compiler are credited to Dr. Grace Murray Hopper, a mathematics professor and early programmer of the Harvard Mark I computer. Hopper, a pioneer along with several other women working on early computers, arguably can be credited with launching the field of programming languages[1]. The history of compilers is deserving of its own article.
Input and output
The input to a compiler is a file (or files) containing a program in a source language. The source file is likely to be a human-readable programming language, though it could be any unambiguous representation of an algorithm, such as a flow chart or other representation of a finite state machine. The output of a compiler is a different file containing code in a target language, often a low-level machine language, though it could just as well be another high-level language.
Two levels of compilation
Most modern programming languages perform compilation in two stages: first from the source language to an intermediate language (typically an assembly language or a bytecode), and second from the intermediate language to machine code. In so-called managed programming languages such as Java and C#, the second compilation is postponed until just before the program needs to execute, in which case it is called "just-in-time" compilation.
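This two-stage split can be observed directly in CPython, which first compiles source text to an intermediate bytecode and then runs that bytecode on a virtual machine (a sketch for illustration; CPython interprets its bytecode rather than just-in-time compiling it):

```python
import dis

# Stage 1: compile source text to an intermediate representation (bytecode).
code = compile("x = 2 + 3", "<example>", "exec")
dis.dis(code)          # print the bytecode instructions

# Stage 2: the virtual machine executes the intermediate representation.
ns = {}
exec(code, ns)
print(ns["x"])         # 5
```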
How a compiler translates
The tasks which a compiler must accomplish include the following:
- Lexical Analysis or Scanning, in which the input characters are recognized, usually by a set of regular expressions, and output as a sequence of tokens.
- Syntactical Analysis or Parsing, in which the sequence of input tokens is recognized by a pushdown automaton and a sequence of semantic actions is produced.
- Semantic Analysis, in which each semantic action builds an internal or intermediate representation of the source program, and context sensitive errors (any error that cannot be discriminated by a context-free language) are detected.
- Code Generation, in which the intermediate language is translated a piece at a time to the target language.
In actuality, there may be multiple optimization stages scattered throughout this process. Additionally, most modern compilers repeatedly translate the language from an intermediate representation to a simpler intermediate representation in order to accommodate a wide swath of optimizations that operate on different levels of detail.
Lexical analysis
During lexical analysis, a set of regular expressions translates the input sequence (generally characters) into an output sequence of tokens. One popular tool to simplify the creation of lexical analyzers is a software package called lex.
Readers accustomed to programming may benefit from a few examples of errors that can be detected during this phase. A lexical analyzer could detect errors in a single token, for instance a number that has the letter 'y' in it, or a string with a missing end quote.
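The scheme described above can be sketched in a few lines of Python; the token kinds and patterns here are illustrative assumptions, not those of any particular language:

```python
import re

# Each token kind is a named regular expression; the lexer tries them in order.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),        # whitespace is discarded
    ("ERROR",  r"."),          # any other character is a lexical error
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text):
    tokens = []
    for m in MASTER.finditer(text):
        kind = m.lastgroup
        if kind == "SKIP":
            continue
        if kind == "ERROR":
            raise SyntaxError(f"unexpected character {m.group()!r}")
        tokens.append((kind, m.group()))
    return tokens

print(tokenize("total = price + 42"))
# [('IDENT', 'total'), ('OP', '='), ('IDENT', 'price'), ('OP', '+'), ('NUMBER', '42')]
```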
Syntactic analysis
During syntactic analysis, an input sequence of tokens is matched against a set of grammatical constructs called productions. As each production is matched, a semantic action routine is called. The role of each semantic action is to build an intermediate representation of the input program, such as a list of variables and functions, and a sequence of instructions comprising each function.
Readers accustomed to programming may benefit from a few examples of errors that can be detected during this phase. A syntactic analyzer could detect a syntactic error, such as a missing semicolon or curly brace. A syntactic analyzer cannot detect the use of an undeclared variable. This is because the declaration of a variable before its use is a context sensitive language requirement, whereas syntactic analyzers are generally context-free language recognizers.
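A parser of this kind can be sketched as a recursive-descent recognizer for a tiny, made-up grammar (an expression is one or more terms joined by '+'), whose semantic actions build a tree:

```python
# Grammar (illustrative): expr := term ('+' term)* ; term := NUMBER | IDENT
# Tokens are (kind, text) pairs, an assumed format for this sketch.

def parse_expr(tokens, pos=0):
    node, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] == ("OP", "+"):
        right, pos = parse_term(tokens, pos + 1)
        node = ("add", node, right)   # semantic action: build a tree node
    return node, pos

def parse_term(tokens, pos):
    if pos < len(tokens) and tokens[pos][0] in ("NUMBER", "IDENT"):
        return tokens[pos], pos + 1
    raise SyntaxError(f"expected a number or identifier at position {pos}")

tree, _ = parse_expr([("IDENT", "price"), ("OP", "+"), ("NUMBER", "42")])
print(tree)  # ('add', ('IDENT', 'price'), ('NUMBER', '42'))
```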
Semantic analysis
During semantic analysis, a compiler builds and examines an intermediate representation of the source program and checks it for consistency.
Readers accustomed to programming may benefit from a few examples of errors that can be detected during this phase. A semantic analyzer could detect errors, such as undeclared variables or functions.
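Detecting an undeclared variable, which the syntactic analyzer could not do, becomes straightforward once an intermediate representation exists. A minimal sketch, over an assumed instruction format:

```python
# Walk an intermediate representation and report uses of undeclared variables.
# The (operation, name) instruction format is an illustrative assumption.
def check_declarations(instructions):
    declared, errors = set(), []
    for op, name in instructions:
        if op == "declare":
            declared.add(name)
        elif op == "use" and name not in declared:
            errors.append(f"undeclared variable: {name}")
    return errors

program = [("declare", "x"), ("use", "x"), ("use", "y")]
print(check_declarations(program))  # ['undeclared variable: y']
```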
Code generation

During code generation, the compiler translates its intermediate representation, a piece at a time, into instructions of the target language. Concepts relevant to this phase include:
- address mode
- application binary interface (ABI)
- instruction scheduling
- instruction set architecture
- intermediate representation
- memory hierarchy
- register
- register allocation
- register allocation by graph coloring
- retarget
- stack frame
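The piece-at-a-time character of code generation can be sketched by lowering an expression tree into instructions for a hypothetical stack machine (tree format and instruction names are assumptions of this sketch):

```python
# Translate an expression tree, one node at a time, into stack-machine code.
def gen(node, out):
    if node[0] == "add":
        gen(node[1], out)          # emit code for the left operand
        gen(node[2], out)          # emit code for the right operand
        out.append("ADD")          # pops two operands, pushes their sum
    elif node[0] == "NUMBER":
        out.append(f"PUSH {node[1]}")
    elif node[0] == "IDENT":
        out.append(f"LOAD {node[1]}")
    return out

print(gen(("add", ("IDENT", "price"), ("NUMBER", "42")), []))
# ['LOAD price', 'PUSH 42', 'ADD']
```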
Optimizations
Optimizations are optional strategies which a compiler may use when emitting output code. Optimizations may be used to improve code execution speed or memory usage, but only if the performance can be improved without sacrificing the correctness of the translation.
- alias analysis
- algebraic simplification
- constant folding
- copy propagation
- common subexpression elimination
- dead assignment elimination
- dead code elimination
- function inlining
- function specialization
- loop optimization
  - loop peeling
  - loop unrolling
  - code motion
  - induction-variable elimination
  - reduction in strength
- peephole optimization - analyzes the output over a small region (the peephole), searching for localized improvements
- tail call optimization
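Two of the optimizations listed above, constant folding and dead assignment elimination, can be sketched over a tiny three-address-style intermediate representation (the instruction format and the precomputed liveness set are illustrative assumptions):

```python
# Each instruction is (destination, operation, operand_a, operand_b).

def constant_fold(instrs):
    """Replace additions whose operands are both literal integers with the result."""
    folded = []
    for dest, op, a, b in instrs:
        if op == "+" and isinstance(a, int) and isinstance(b, int):
            folded.append((dest, "const", a + b, None))
        else:
            folded.append((dest, op, a, b))
    return folded

def dead_assignment_elimination(instrs, live):
    """Drop assignments to variables that are never read afterwards."""
    return [i for i in instrs if i[0] in live]

ir = [("x", "+", 2, 3), ("unused", "+", "x", 1)]
print(dead_assignment_elimination(constant_fold(ir), live={"x"}))
# [('x', 'const', 5, None)]
```

Note that correctness is preserved at every step: the folded constant computes the same value, and only results no later instruction reads are discarded.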
- ↑ Robert Slater, Portraits in Silicon, ch. 20, p. 219. The MIT Press, 1987.