
Lexical analysis


Lexical analysis is the process of taking an input string of characters (such as the source code of a computer program) and producing a sequence of symbols called "lexical tokens", or just "tokens", which may be handled more easily by a parser.

A lexical analyzer, or lexer for short, typically has two stages. The first stage is called the scanner and is usually based on a finite state machine. It reads through the input one character at a time, changing states based on the characters it encounters. When it reaches an accepting state, it notes the type and position of the acceptance and continues. Eventually it reaches a "dead state": a non-accepting state that goes only to itself on every character. At that point the scanner is done; it backs up to the last accepting state, which gives it the type and length of the longest valid lexeme.
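
To make this concrete, here is a minimal sketch of such a scanner in Python, hand-coding the state machine for just two hypothetical token types, NAME and INTEGER; the state names and the function scan are invented for illustration.

 def scan(text, pos):
     """Return (token_type, length) for the longest lexeme starting at
     text[pos:], or (None, 0) if nothing matches."""
     state = "START"
     last_accept = (None, 0)   # type and length at the last accepting state
     i = pos
     while i < len(text):
         c = text[i]
         if state == "START":
             if c.isalpha() or c == "_":
                 state = "IN_NAME"
             elif c.isdigit():
                 state = "IN_INT"
             else:
                 break                 # no transition: the dead state
         elif state == "IN_NAME" and not (c.isalnum() or c == "_"):
             break                     # dead state
         elif state == "IN_INT" and not c.isdigit():
             break                     # dead state
         i += 1
         # IN_NAME and IN_INT are both accepting states
         if state == "IN_NAME":
             last_accept = ("NAME", i - pos)
         elif state == "IN_INT":
             last_accept = ("INTEGER", i - pos)
     return last_accept                # back up to the last acceptance

 print(scan("net_worth_future = 42;", 0))   # ('NAME', 16)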

A lexeme, however, is only a string of characters known to be of a certain type. In order to construct a token, the lexical analyzer needs a second stage. This stage, the evaluator, goes over the characters of the lexeme to produce a value. The lexeme's type combined with its value is what properly constitutes a token, which can be given to a parser. (Some tokens such as parentheses do not really have values, and so the evaluator function for these can return nothing. The evaluators for integers, identifiers, and strings can be considerably more complex. Sometimes evaluators can suppress a lexeme entirely, concealing it from the parser, which is useful for whitespace and comments.)
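
Continuing the sketch above, the evaluator stage might look like the following; the token types and the choice of which lexemes to suppress are assumptions for illustration.

 def evaluate(token_type, lexeme):
     """Turn a (type, lexeme) pair into a token, or None to suppress it."""
     if token_type in ("WHITESPACE", "COMMENT"):
         return None                       # suppressed: the parser never sees it
     if token_type == "INTEGER":
         return (token_type, int(lexeme))  # compute the numeric value
     if token_type == "NAME":
         return (token_type, lexeme)       # the value is the spelling itself
     return (token_type, None)             # punctuation carries no value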

For example, in the source code of a computer program the string

 net_worth_future = (assets - liabilities);

might be converted (with whitespace suppressed) into the lexical token stream:

 NAME "net_worth_future" 
 EQUALS 
 OPEN_PARENTHESIS 
 NAME "assets" 
 MINUS 
 NAME "liabilities" 
 CLOSE_PARENTHESIS 
 SEMICOLON
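
One compact way to reproduce this stream, folding the scanner and evaluator together, is to drive the lexer from a table of regular expressions. The following Python sketch is illustrative only: the token names come from the example above, and the function tokenize is hypothetical.

 import re

 # (token type, pattern); order matters where patterns could overlap
 TOKEN_SPEC = [
     ("NAME",              r"[a-zA-Z_][a-zA-Z_0-9]*"),
     ("EQUALS",            r"="),
     ("OPEN_PARENTHESIS",  r"\("),
     ("CLOSE_PARENTHESIS", r"\)"),
     ("MINUS",             r"-"),
     ("SEMICOLON",         r";"),
     ("WHITESPACE",        r"\s+"),
 ]
 MASTER = re.compile("|".join("(?P<%s>%s)" % pair for pair in TOKEN_SPEC))

 def tokenize(text):
     for match in MASTER.finditer(text):
         kind = match.lastgroup
         if kind == "WHITESPACE":
             continue                  # suppressed, as described above
         value = match.group() if kind == "NAME" else None
         yield (kind, value)

 for token in tokenize("net_worth_future = (assets - liabilities);"):
     print(token)

Note that this sketch silently skips any character that matches no pattern; a real lexer would report such characters as errors.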

Lexical analysis makes writing a parser much easier. Instead of having to build up names such as "net_worth_future" from their individual characters, the parser can start with tokens and concern itself only with syntactic matters. This makes the parser simpler to program, though not necessarily faster to run. Because the lexical analyzer is the one subsystem that must examine every single character of the input, it can be a compute-intensive step whose performance is critical, for example when used in a compiler.

Though it is possible, and sometimes necessary, to write a lexer by hand, lexers are often generated by automated tools. These tools accept regular expressions describing the tokens allowed in the input stream. Each regular expression is associated with a fragment of code in a programming language that evaluates the lexemes matching that expression. The tool then constructs a state table for the corresponding finite state machine and generates program code containing the table, the evaluation fragments, and a driver routine that uses them appropriately.
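
The generated code is typically table-driven rather than a chain of hand-written conditionals. A rough Python sketch of what such output might look like, with an invented transition table covering only a NAME-style token:

 import string

 LETTERS = set(string.ascii_letters) | {"_"}
 DIGITS = set(string.digits)

 # state -> {input character -> next state}; missing entries mean "dead"
 TABLE = {
     0: {c: 1 for c in LETTERS},
     1: {c: 1 for c in LETTERS | DIGITS},
 }
 ACCEPTING = {1: "NAME"}

 def run_table(text, pos):
     state, last_accept = 0, (None, 0)
     for i in range(pos, len(text)):
         state = TABLE[state].get(text[i])
         if state is None:
             break                     # fell into the dead state
         if state in ACCEPTING:
             last_accept = (ACCEPTING[state], i - pos + 1)
     return last_accept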

Regular expressions compactly represent patterns that the characters in lexemes might follow. For example, a NAME token might be any alphabetical character or an underscore, followed by any number of instances of any alphanumeric character or an underscore. This could be represented compactly by the string [a-zA-Z_][a-zA-Z_0-9]*. This means "any character a-z, A-Z or _, then 0 or more of a-z, A-Z, _ or 0-9".
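
For instance, Python's re module can try this pattern directly (the example strings are arbitrary):

 import re

 name = re.compile(r"[a-zA-Z_][a-zA-Z_0-9]*")
 print(name.match("net_worth_future"))  # matches all 16 characters
 print(name.match("9lives"))            # None: may not start with a digit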

Regular expressions and the finite state machines they generate are not capable of handling recursive patterns, such as "n opening parentheses, followed by a statement, followed by n closing parentheses." They cannot keep count and verify that n is the same on both sides, unless n is drawn from a finite set of permissible values. It takes a full-fledged parser to recognize such patterns in their full generality: a parser can push each opening parenthesis onto a stack, pop one for each closing parenthesis, and check that the stack is empty at the end.
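
That stack discipline, reduced to bare parenthesis matching, can be sketched as follows; the function name and token representation are invented for illustration.

 def balanced(tokens):
     """Check that parentheses nest properly: something no finite
     state machine can do for unbounded nesting depth."""
     stack = []
     for token in tokens:
         if token == "(":
             stack.append(token)       # push each opener
         elif token == ")":
             if not stack:
                 return False          # a closer with no matching opener
             stack.pop()
     return not stack                  # every opener was matched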

The Lex programming language and its compiler are designed to generate code for fast lexical analysers based on a formal description of the lexical syntax. Lex is not generally considered sufficient for applications with a complicated set of lexical rules and severe performance requirements; for instance, the GNU Compiler Collection uses hand-written lexers.


