Marvin Minsky

Marvin Lee Minsky (born August 9, 1927), sometimes affectionately known as "Old Man Minsky", is an American scientist in the field of Artificial Intelligence (AI), co-founder of MIT's AI laboratory, and author of several texts on AI and philosophy.

He was born in New York. He holds degrees from Harvard and Princeton, and has taught at Harvard and MIT. He is currently Toshiba Professor of Media Arts and Sciences, and Professor of Electrical Engineering and Computer Science, at the Massachusetts Institute of Technology.

Minsky's patents include the first head-mounted graphical display (1963) as well as the confocal scanning microscope and, jointly with Seymour Papert, the first Logo "turtle".

Minsky figures in an artificial intelligence koan from the Jargon File:

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
"What are you doing?", asked Minsky.
"I am training a randomly wired neural net to play Tic-tac-toe" Sussman replied.
"Why is the net wired randomly?", asked Minsky.
"I do not want it to have any preconceptions of how to play", Sussman said.
Minsky then shut his eyes.
"Why do you close your eyes?", Sussman asked his teacher.
"So that the room will be empty."
At that moment, Sussman was enlightened.

Selected works

  • "Neural Nets and the Brain Model Problem," Ph.D. dissertation, Princeton University, 1954. The first publication of theories and theorems about learning in neural networks, secondary reinforcement, circulating dynamic storage and synaptic modifications.
  • Computation: Finite and Infinite Machines, Prentice-Hall, 1967. A standard text in Computer Science. Out of print now, but soon to reappear.
  • Semantic Information Processing, MIT Press, 1968. This collection had a strong influence on modern computational linguistics.
  • Perceptrons (with Seymour A. Papert), MIT Press, 1969 (Enlarged edition, 1988). Developed the modern theory of computational geometry and established fundamental limitations of loop-free connectionist learning machines. Contrary to popular belief, the results in this book are not restricted to networks with 1, 2, or 3 layers; in fact, virtually every theorem is easily seen to apply to feedforward networks of any depth, with appropriate reductions in the still-exponential growth rates. Neural net theorists should read it again, this time attending to the central scaling issues. (A minimal illustration of the best-known limitation appears after this list.)
  • Artificial Intelligence, with Seymour Papert, Univ. of Oregon Press, 1972. Out of print.
  • Robotics, Doubleday, 1986. Edited collection of essays about robotics, with Introduction and Postscript by Minsky.
  • The Society of Mind, Simon and Schuster, 1987. The first comprehensive description of the Society of Mind theory of intellectual structure and development. See also The Society of Mind (CD-ROM version), Voyager, 1996.
  • The Turing Option, with Harry Harrison, Warner Books, New York, 1992. Science fiction thriller about the construction of a superintelligent robot in the year 2023.
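
The best-known result formalized in Perceptrons is that a single loop-free linear threshold unit cannot compute the XOR (parity) function, because XOR is not linearly separable. The short Python sketch below is not part of the original article and its function and variable names are purely illustrative; it shows the classic perceptron learning rule converging on AND but never converging on XOR, in line with that result.

def perceptron_train(samples, epochs=100, lr=1.0):
    """Classic perceptron learning rule for a 2-input linear threshold unit."""
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            delta = target - out
            if delta != 0:
                errors += 1
                w[0] += lr * delta * x1
                w[1] += lr * delta * x2
                b += lr * delta
        if errors == 0:          # a full pass with no mistakes: converged
            return w, b, True
    return w, b, False           # never separated the data

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print("AND converges:", perceptron_train(AND)[2])   # True: AND is linearly separable
print("XOR converges:", perceptron_train(XOR)[2])   # False: XOR is not

Adding a hidden layer removes this particular obstacle; the book's deeper point, as the note above stresses, concerns how the required network resources scale with input size.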
