Formal grammar

In computer science, a formal grammar is a way to describe a formal language, i.e., a set of finite-length strings over a certain finite alphabet. Formal grammars are so named by analogy with the concept of grammar for human languages.

The basic idea behind these grammars is that we generate strings by beginning with a special start symbol and then applying rules that indicate how certain combinations of symbols may be replaced with other combinations of symbols. For example, assume the alphabet consists of 'a' and 'b', the start symbol is 'S', and we have the following rules:

1. S -> aSb
2. S -> ba

then we can rewrite "S" to "aSb" by replacing 'S' with "aSb" (rule 1), and we can then rewrite "aSb" to "aaSbb" by again applying the same rule. This is repeated until the result contains only symbols from the alphabet. In our example we can rewrite S as follows: S -> aSb -> aaSbb -> aababb. The language of the grammar then consists of all the strings that can be generated that way; in this case: ba, abab, aababb, aaababbb, etc.
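To make the rewriting process concrete, here is a minimal Python sketch that replays the derivation above, always replacing the leftmost 'S'. The rule numbering and the fixed rule sequence (1, 1, 2) are taken directly from the example; this is an illustration only, not a general rewriting engine.

# Replay the derivation S -> aSb -> aaSbb -> aababb, rewriting the leftmost 'S'.
rules = {1: ("S", "aSb"), 2: ("S", "ba")}

form = "S"
for rule_no in (1, 1, 2):
    lhs, rhs = rules[rule_no]
    form = form.replace(lhs, rhs, 1)   # replace only the first occurrence
    print(form, f"(rule {rule_no})")
# prints: aSb (rule 1), aaSbb (rule 1), aababb (rule 2)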

Formal definition

A formal grammar G consists of the following components:

  • A finite set N of nonterminal symbols.
  • A finite set Σ of terminal symbols that is disjoint from N.
  • A finite set P of production rules where a rule is of the form
string in (Σ ∪ N)* -> string in (Σ ∪ N)*

(where * is the Kleene star and ∪ is set union) with the restriction that the left-hand side of a rule (i.e., the part to the left of the ->) must contain at least one nonterminal symbol.
  • A symbol S in N that is indicated as the start symbol.
Usually such a formal grammar G is simply summarized as (N, Σ, P, S).

The language of a formal grammar G = (N, Σ, P, S), denoted as L(G), is defined as all those strings over Σ that can be generated by starting with the start symbol S and then applying the production rules in P until no more nonterminal symbols are present.
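As an illustration of this definition, the following Python sketch represents a grammar directly by its four components N, Σ, P and S and enumerates the strings of L(G) reachable within a fixed number of rewriting steps. The step bound is purely an illustrative device: L(G) is in general infinite, so no finite procedure can list it completely.

def language(N, Sigma, P, S, max_steps=6):
    """Strings over Sigma derivable from S in at most max_steps rewrites."""
    forms = {S}          # sentential forms still containing nonterminals
    found = set()        # fully terminal strings, i.e. members of L(G)
    for _ in range(max_steps):
        new_forms = set()
        for form in forms:
            for lhs, rhs in P:
                start = 0
                while (i := form.find(lhs, start)) != -1:
                    rewritten = form[:i] + rhs + form[i + len(lhs):]
                    if any(n in rewritten for n in N):
                        new_forms.add(rewritten)
                    else:
                        found.add(rewritten)
                    start = i + 1
        forms = new_forms
    return found

# The grammar from the introduction, written as (N, Σ, P, S):
words = language({"S"}, {"a", "b"}, [("S", "aSb"), ("S", "ba")], "S")
print(sorted(words, key=len))   # ['ba', 'abab', 'aababb', 'aaababbb', ...]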

Example

Consider, for example, the grammar G with N = {S, B}, Σ = {a, b, c}, P consisting of the following production rules

1. S -> aBSc
2. S -> abc
3. Ba -> aB
4. Bb -> bb

and the nonterminal symbol S as the start symbol. Some example derivations of strings in L(G) are given below; the production rule applied at each step is indicated in parentheses.

  • S -> (2) abc
  • S -> (1) aBSc -> (2) aBabcc -> (3) aaBbcc -> (4) aabbcc
  • S -> (1) aBSc -> (1) aBaBScc -> (2) aBaBabccc -> (3) aaBBabccc -> (3) aaBaBbccc -> (3) aaaBBbccc -> (4) aaaBbbccc -> (4) aaabbbccc
This grammar defines the language { a^n b^n c^n | n > 0 }, where a^n denotes a string of n consecutive a's.
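The second derivation above can be checked mechanically. The short Python sketch below applies rules 1, 2, 3 and 4 in that order, each time rewriting the leftmost occurrence of the rule's left-hand side; it is only meant to confirm the listed derivation, not to decide membership in L(G).

P = {1: ("S", "aBSc"), 2: ("S", "abc"), 3: ("Ba", "aB"), 4: ("Bb", "bb")}

form = "S"
for rule_no in (1, 2, 3, 4):
    lhs, rhs = P[rule_no]
    form = form.replace(lhs, rhs, 1)   # rewrite the leftmost occurrence
    print(form, f"(rule {rule_no})")
# prints: aBSc, aBabcc, aaBbcc, aabbcc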

Formal grammars closely resemble Lindenmayer systems (L-systems), but differ in that L-systems make no distinction between terminals and nonterminals, L-systems place restrictions on the order in which the rules are applied, and L-systems can run forever, generating an infinite sequence of strings. Typically, each string is associated with a set of points in space, and the "output" of the L-system is defined to be the limit of those sets.

Classes of grammars

Some restricted classes of grammars, and the languages that can be derived with them, have special names and are studied separately. One common classification system for grammars is the Chomsky hierarchy, a set of four types of grammars developed by Noam Chomsky in the 1950s. These types differ in that each successive type imposes stricter restrictions on the production rules and can express fewer formal languages. Two important types are context-free grammars and regular grammars. The languages that can be described with such a grammar are called context-free languages and regular languages, respectively. Although much less powerful than unrestricted grammars, which can in fact express any language that can be accepted by a Turing machine, these two types of grammars are used most often because parsers for them can be implemented efficiently. For example, for context-free grammars there are well-known algorithms to generate efficient LL parsers and LR parsers.

Context-free grammars

In context-free grammars, the left-hand side of a production rule may consist only of a single nonterminal symbol. The language defined above is not a context-free language, but, for example, the language { a^n b^n | n > 0 } is, as it can be defined by the grammar G2 with N={S}, Σ={a,b}, S the start symbol, and the following production rules:

1. S -> aSb
2. S -> ab
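Because both alternatives of G2 begin with 'a', a recognizer for this grammar needs a little backtracking. The Python sketch below reads the two rules directly and decides membership in { a^n b^n | n > 0 }; the function names are illustrative only, not a fixed API.

def match_S(s, i):
    """Return the set of positions j such that s[i:j] is derivable from S."""
    ends = set()
    if i < len(s) and s[i] == "a":          # rule 1: S -> aSb
        for j in match_S(s, i + 1):
            if j < len(s) and s[j] == "b":
                ends.add(j + 1)
    if s[i:i + 2] == "ab":                  # rule 2: S -> ab
        ends.add(i + 2)
    return ends

def in_language(s):
    return len(s) in match_S(s, 0)

print([w for w in ("ab", "aabb", "aaabbb", "abab", "aab") if in_language(w)])
# -> ['ab', 'aabb', 'aaabbb']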

Regular grammars

In regular grammars, the left-hand side is again a single nonterminal symbol, but now the right-hand side is also restricted: it may be the empty string, a single terminal symbol, or a single terminal symbol followed by a nonterminal symbol, but nothing else. (Sometimes a broader definition is used: one may allow longer strings of terminals or a single nonterminal by itself, which still defines the same class of languages.)

The language defined above is not regular, but the language { a^n b^m | m,n > 0 } is, as it can be defined by the grammar G3 with N={S,A,B}, Σ={a,b}, S the start symbol, and the following production rules:

1. S -> aA
2. A -> aA
3. A -> bB
4. B -> bB
5. B -> ε
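One standard way to implement a regular grammar of this shape is to read it as a finite automaton: each nonterminal becomes a state, a rule X -> tY becomes a transition on the terminal t, and X -> ε marks X as an accepting state. The Python sketch below applies this reading to G3; the concrete dictionaries are specific to this example.

transitions = {("S", "a"): "A", ("A", "a"): "A", ("A", "b"): "B", ("B", "b"): "B"}
accepting = {"B"}                      # rule 5: B -> ε

def accepts(word, state="S"):
    for ch in word:
        state = transitions.get((state, ch))
        if state is None:              # no rule applies: reject
            return False
    return state in accepting

print([w for w in ("ab", "aaabb", "aabbb", "ba", "a", "b") if accepts(w)])
# -> ['ab', 'aaabb', 'aabbb']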

Terminology

Yet to write

  • concrete syntax tree, abstract syntax tree
  • left derivation, right derivation
  • ambiguous grammar


