There are three basic types of cryptanalytic attack, characterised by what the cryptanalyst knows: ciphertext only; known ciphertext/plaintext pairs; and chosen plaintext or chosen ciphertext.
Often the cryptanalyst either will know some of the plaintext or will be able to guess at, and exploit, a likely element of the text, such as an encrypted letter beginning with "Dear Sir" or a computer session starting with "LOGIN:". The last category (chosen plaintext or chosen ciphertext) represents the most favourable situation for the cryptanalyst, in which he can cause either the transmitter to encrypt a plaintext of his choice or the receiver to decrypt a ciphertext that he chose. In single-key (secret-key) cryptography there is no significant difference between a chosen-plaintext and a chosen-ciphertext attack, since the same key is used for both encryption and decryption; in two-key cryptography, however, it is possible for one of the encryption or decryption functions to be secure against chosen input (either plain or encrypted) while the other is vulnerable.
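The single-key symmetry can be illustrated with a toy repeating-key XOR cipher (a deliberately weak sketch, not a real cipher): encryption and decryption are literally the same operation under the same key, so a chosen-plaintext oracle and a chosen-ciphertext oracle give the attacker identical power.

```python
def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Repeating-key XOR: applying it twice with the same key
    # returns the original data, so encrypt == decrypt.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret"
ciphertext = xor_cipher(key, b"Dear Sir")
assert xor_cipher(key, ciphertext) == b"Dear Sir"
```

Because the two oracles coincide here, any attack phrased in terms of chosen ciphertexts can be re-phrased in terms of chosen plaintexts, and vice versa; this is exactly the distinction that two-key systems can fail to preserve.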
Because of its reliance on "hard" mathematical problems as a basis for crypto algorithms and because one of the keys is publicly exposed, two-key cryptography has led to a new type of cryptanalysis, which is nearly indistinguishable from research in other areas of computational mathematics. Unlike ciphertext attacks or ciphertext/plaintext pair attacks in single-key cryptosystems, this sort of cryptanalysis aims to break the cryptosystem by analysis based only on knowledge of the underlying mathematical connection between the two keys. There is no counterpart to this kind of cryptanalytic attack in single-key systems.

One of the most attractive schemes for exchanging session keys in a hybrid cryptosystem (Diffie-Hellman key exchange) depends on the ease with which a number (a primitive root) can be raised to a power (in a finite field), as opposed to the difficulty of calculating the discrete logarithm. A special-purpose chip to implement this algorithm was produced by a U.S. company, and several others designed secure electronic mail and computer-networking schemes based on the algorithm. In 1983 Donald Coppersmith of IBM found a computationally feasible way to find discrete logarithms in precisely those finite fields that had been of greatest cryptographic interest, and thereby gave the cryptanalyst a tool with which to break those cryptosystems. Similarly, the RSA encryption algorithm is no more secure than its modulus is difficult to factor, so that a breakthrough in factoring would also be a cryptanalytic breakthrough.
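The asymmetry Diffie-Hellman relies on can be sketched with deliberately tiny, insecure parameters (p = 23 and g = 5 here are illustrative only; real systems use primes hundreds of digits long): each party performs a cheap modular exponentiation, while the eavesdropper is left with a discrete-logarithm problem.

```python
p, g = 23, 5          # toy prime and primitive root; far too small for real use
a, b = 6, 15          # Alice's and Bob's secret exponents

A = pow(g, a, p)      # Alice publishes g^a mod p
B = pow(g, b, p)      # Bob publishes   g^b mod p

# Both sides derive the same shared secret g^(a*b) mod p.
assert pow(B, a, p) == pow(A, b, p)

# The eavesdropper sees only p, g, A, B and must recover a (or b):
# a brute-force discrete logarithm, feasible here only because p is tiny.
recovered = next(x for x in range(1, p) if pow(g, x, p) == A)
assert recovered == a
```

Coppersmith's 1983 result did not make this brute-force search fast; it gave a much better algorithm for discrete logarithms in certain finite fields, which is why the choice of field matters so much to the scheme's security.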
In 1980 one could factor a difficult 50-digit number at an expense of 1,000,000,000,000 elementary computer operations (i.e., add, subtract, shift, and so forth). By 1984 the state of the art in factoring algorithms had advanced to the point where a 75-digit number could be factored in 1,000,000,000,000 operations, and those operations could themselves be performed much faster. Computer speeds may confidently be expected to continue to increase. Factoring techniques may continue to improve as well, but such improvements depend on mathematical insight and creativity, neither of which has ever been successfully predictable.
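To put such an operation count in perspective, here is a back-of-the-envelope conversion (the machine speeds below are hypothetical round numbers, not measurements of any particular computer):

```python
OPS = 1_000_000_000_000              # 10^12 elementary operations

def wall_time_days(ops_per_sec):
    # Hypothetical machine speed, in elementary operations per second.
    return OPS / ops_per_sec / 86_400    # 86,400 seconds per day

# At a (hypothetical) million operations per second: about 11.6 days.
# At a (hypothetical) billion operations per second: under 17 minutes.
slow, fast = wall_time_days(10**6), wall_time_days(10**9)
```

The point of the comparison is that the same 10^12-operation budget shrinks from weeks to minutes as hardware improves, independently of any algorithmic progress.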
If mathematical advances and increases in computer speed were to make it practically feasible to factor numbers of 150 or more digits, a cryptanalyst could break several current commercial RSA implementations. In other words, the security of two-key cryptography depends on mathematical questions in a way that single-key cryptography generally does not, and conversely it equates cryptanalysis with mathematical research in a new way.
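Why a factoring breakthrough breaks RSA can be seen with a toy modulus (the textbook-sized parameters p = 53, q = 61 below are far too small for real security): once the attacker factors n, recomputing the private exponent is straightforward.

```python
n, e = 3233, 17                      # public key (toy sizes)
c = pow(65, e, n)                    # intercepted ciphertext of message 65

# Cryptanalysis step: factor the modulus (trivial here by trial division;
# this is the step that is infeasible for moduli of realistic size).
p = next(d for d in range(2, n) if n % d == 0)
q = n // p
assert (p, q) == (53, 61)

# With the factors known, the private exponent follows immediately.
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                  # modular inverse (Python 3.8+)
assert pow(c, d, n) == 65            # recovered plaintext
```

Everything after the factoring step is cheap arithmetic, which is exactly why RSA's security reduces to (at most) the difficulty of factoring the modulus.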
(The specifics above are somewhat out of date: 150-digit numbers of the kind used in RSA have since been factored. The effort was greater than the figures above suggest, but was not unreasonable on fast modern computers, and 150-digit numbers are no longer considered large enough for RSA keys. As of 2001, 300-digit moduli are still considered too hard to factor, though factoring methods will probably continue to improve over time.)