Case-based reasoning

Case-based reasoning (CBR), broadly construed, is the process of solving new problems based on the solutions of similar past problems. An auto mechanic who fixes an engine by recalling another car that exhibited similar symptoms is using case-based reasoning. A lawyer who advocates a particular outcome in a trial based on legal precedents, or a judge who creates case law, is using case-based reasoning. So, too, an engineer who copies working elements of nature (practicing biomimicry) is treating nature as a database of solutions to problems.

It has been argued that case-based reasoning is not only a powerful method for computer reasoning, but also a pervasive behavior in everyday human problem solving. A more radical claim, associated with prototype theory and explored most deeply in cognitive science, is that all reasoning is based on past cases experienced or accepted by the reasoner.

Case-based reasoning has been formalized for purposes of computer reasoning as a four-step process[1] (a minimal code sketch of the cycle appears after the list):

  1. Retrieve: Given a target problem, retrieve cases from memory that are relevant to solving it. A case consists of a problem, its solution, and, typically, annotations about how the solution was derived. For example, suppose Fred wants to prepare blueberry pancakes. Because he is a novice cook, the most relevant experience he can recall is one in which he successfully made plain pancakes. The procedure he followed for making the plain pancakes, together with justifications for decisions made along the way, constitutes Fred's retrieved case.
  2. Reuse: Map the solution from the previous case to the target problem. This may involve adapting the solution as needed to fit the new situation. In the pancake example, Fred must adapt his retrieved solution to include the addition of blueberries.
  3. Revise: Having mapped the previous solution to the target situation, test the new solution in the real world (or a simulation) and, if necessary, revise. Suppose Fred adapted his pancake solution by adding blueberries to the batter. After mixing, he discovers that the batter has turned blue -- an undesired effect. This suggests the following revision: delay the addition of blueberries until after the batter has been ladled into the pan.
  4. Retain: After the solution has been successfully adapted to the target problem, store the resulting experience as a new case in memory. Fred, accordingly, records his newfound procedure for making blueberry pancakes, thereby enriching his set of stored experiences, and better preparing him for future pancake-making demands.
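
The cycle can be made concrete with a short, illustrative Python sketch. The Case class, the overlap-based similarity measure, and the hard-coded repair below are assumptions chosen to mirror the pancake example; they do not describe the design of any particular CBR system.

  # Illustrative retrieve/reuse/revise/retain cycle (hypothetical names and data).
  from dataclasses import dataclass

  @dataclass
  class Case:
      problem: set    # features describing the problem, e.g. {"pancake", "plain"}
      solution: list  # ordered steps that solved the problem
      notes: str      # annotations about how the solution was derived

  case_base = [
      Case({"pancake", "plain"},
           ["mix batter", "ladle batter into pan", "fry until golden"],
           "basic batter and frying method"),
  ]

  def retrieve(target):
      # Retrieve: pick the stored case whose problem features overlap most with the target.
      return max(case_base, key=lambda c: len(c.problem & target))

  def reuse(case, target):
      # Reuse: copy the old solution and adapt it; this naive adaptation folds
      # each new ingredient straight into the batter.
      adapted = list(case.solution)
      for extra in sorted(target - case.problem):
          adapted.insert(1, f"add {extra} to batter")
      return adapted

  def revise(solution):
      # Revise: "test" the solution; in the example the batter turns blue,
      # so the repair delays the blueberries until after ladling.
      if "add blueberry to batter" in solution:
          solution.remove("add blueberry to batter")
          solution.insert(solution.index("ladle batter into pan") + 1,
                          "sprinkle blueberries onto the ladled batter")
      return solution

  def retain(target, solution):
      # Retain: store the adapted, tested solution as a new case for future reuse.
      case_base.append(Case(set(target), solution, "adapted from plain pancakes"))

  target = {"pancake", "blueberry"}
  solution = revise(reuse(retrieve(target), target))
  retain(target, solution)
  print(solution)

On this toy run the case base grows from one case to two, so the newly retained blueberry-pancake case is itself a candidate for retrieval the next time a similar target problem arrives.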

At first glance, CBR may seem similar to the rule-induction algorithms of machine learning.[2] Like a rule-induction algorithm, CBR starts with a set of cases or training examples; it forms generalizations of these examples, albeit implicit ones, by identifying commonalities between a retrieved case and the target problem. For instance, when Fred mapped his procedure for plain pancakes to blueberry pancakes, he decided to use the same basic batter and frying method, thus implicitly generalizing the set of situations under which the batter and frying method can be used.

The key difference, however, between the implicit generalization in CBR and the generalization in rule induction lies in when the generalization is made. A rule-induction algorithm draws its generalizations from a set of training examples before the target problem is even known; that is, it performs eager generalization. For instance, if a rule-induction algorithm were given recipes for plain pancakes, Dutch apple pancakes, and banana pancakes as its training examples, it would have to derive, at training time, a set of general rules for making all types of pancakes. It would not be until testing time that it would be given, say, the task of cooking blueberry pancakes. The difficulty for the rule-induction algorithm is in anticipating the different directions in which it should attempt to generalize its training examples. This is in contrast to CBR, which delays (implicit) generalization of its cases until testing time -- a strategy of lazy generalization. In the pancake example, CBR has already been given the target problem of cooking blueberry pancakes; thus it can generalize its cases exactly as needed to cover this situation. CBR therefore tends to be a good approach for rich, complex domains in which there are myriad ways to generalize a case.
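
The timing difference can also be shown in miniature. In the Python sketch below, which uses invented recipe features, the eager learner commits to a generalization at training time, before any target exists, while the lazy, CBR-style learner simply stores its cases and does all of its generalizing once the blueberry target arrives.

  # Toy contrast between eager and lazy generalization (invented data).
  training = [
      ({"flour", "egg", "milk"},           "plain pancakes"),
      ({"flour", "egg", "milk", "apple"},  "Dutch apple pancakes"),
      ({"flour", "egg", "milk", "banana"}, "banana pancakes"),
  ]

  # Eager: compress the examples into one general rule at training time,
  # before any target problem is known (here, simply the shared features).
  eager_rule = set.intersection(*(features for features, _ in training))

  # Lazy (CBR-style): keep the raw cases and generalize only when a target
  # arrives, by retrieving the most similar case and reusing its solution.
  def lazy_solve(target):
      features, solution = max(
          training,
          key=lambda case: len(case[0] & target) - len(case[0] ^ target))
      return solution

  target = {"flour", "egg", "milk", "blueberry"}   # unseen at training time
  print(eager_rule)         # the one rule fixed before the target existed
  print(lazy_solve(target)) # nearest stored case, chosen only now: "plain pancakes"

The eager learner must guess in advance which directions of generalization will matter; the lazy learner can wait until it knows that the direction it needs is "blueberry".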

CBR traces its roots to the work of Roger Schank and his students at Yale University in the early 1980s. Schank's model of dynamic memory[3] was the basis for the earliest CBR systems: Janet Kolodner's CYRUS[4] and Michael Lebowitz's IPP.[5] Other schools of CBR and closely allied fields emerged in the 1980s, investigating such topics as CBR in legal reasoning, memory-based reasoning (a way of reasoning from examples on massively parallel machines), and combinations of CBR with other reasoning methods. In the 1990s, interest in CBR grew in the international community, as evidenced by the establishment of an International Conference on Case-Based Reasoning in 1995, as well as European, German, British, Italian, and other CBR workshops. CBR technology has produced a number of successful deployed systems, the earliest being Lockheed's CLAVIER,[6] a system for laying out composite parts to be baked in an industrial convection oven. CBR has been used extensively in help-desk applications such as the Compaq SMART system.[7] As of this writing, a number of CBR decision support tools are commercially available, including k-Commerce from eGain (formerly Inference Corporation) and Kaidara Advisor from Kaidara (formerly AcknoSoft).

See also: decision tree, genetic algorithm, pattern matching


  1. Agnar Aamodt and Enric Plaza, "Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches," Artificial Intelligence Communications 7, no. 1 (1994): 39-52.
  2. Rule-induction algorithms are procedures for learning rules for a given concept by generalizing from examples of that concept. For example, a rule-induction algorithm might learn rules for forming the plural of English nouns from examples such as dog/dogs, fly/flies, and ray/rays.
  3. Roger Schank, Dynamic Memory: A Theory of Learning in Computers and People (New York: Cambridge University Press, 1982).
  4. Janet Kolodner, "Reconstructive Memory: A Computer Model," Cognitive Science 7, no. 4 (1983).
  5. Michael Lebowitz, "Memory-Based Parsing," Artificial Intelligence 21 (1983): 363-404.
  6. Bill Mark, "Case-Based Reasoning for Autoclave Management," Proceedings of the Case-Based Reasoning Workshop (1989).
  7. Trung Nguyen, Mary Czerwinski, and Dan Lee, "COMPAQ QuickSource: Providing the Consumer with the Power of Artificial Intelligence," in Proceedings of the Fifth Annual Conference on Innovative Applications of Artificial Intelligence (Washington, DC: AAAI Press, 1993), 142-151.

For Further Reading

  • Aamodt, Agnar, and Enric Plaza. "Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches." Artificial Intelligence Communications 7, no. 1 (1994): 39-52.
  • Althoff, Klaus-Dieter, Ralph Bergmann, and L. Karl Branting, eds. Case-Based Reasoning Research and Development: Proceedings of the Third International Conference on Case-Based Reasoning. Berlin: Springer Verlag, 1999.
  • Kolodner, Janet. Case-Based Reasoning. San Mateo: Morgan Kaufmann, 1993.
  • Leake, David, and Enric Plaza, eds. Case-Based Reasoning Research and Development: Proceedings of the Second International Conference on Case-Based Reasoning. Berlin: Springer Verlag, 1997.
  • Riesbeck, Christopher, and Roger Schank. Inside Case-based Reasoning. Northvale, NJ: Erlbaum, 1989.
  • Veloso, Manuela, and Agnar Aamodt, eds. Case-Based Reasoning Research and Development: Proceedings of the First International Conference on Case-Based Reasoning. Berlin: Springer Verlag, 1995.

An earlier version (http://www.nupedia.com/article/465/) of the above article was posted on Nupedia.


