Conceptual dependency theory

Conceptual dependency theory is a model of natural language understanding used in artificial intelligence systems.

Roger Schank at Stanford University introduced the model in 1969, in the early days of artificial intelligence.[1] The model was used extensively by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner.

Schank developed the model to represent knowledge for natural language input into computers. Partly influenced by the work of Sydney Lamb, his goal was to make the meaning representation independent of the words used in the input; that is, two sentences identical in meaning would have a single representation. The system was also intended to draw logical inferences.[2]

The model uses the following basic representational tokens:[3]

  • real-world objects, each with some attributes
  • real-world actions, each with attributes
  • times
  • locations

A set of conceptual transitions then acts on this representation, e.g. an ATRANS is used to represent a transfer such as "give" or "take", while a PTRANS acts on locations, representing verbs such as "move" or "go". An MTRANS represents mental acts such as "tell".
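
These tokens and transitions can be pictured concretely. The following is a minimal Python sketch, not drawn from Schank's papers; the names Act and Conceptualization and all field names are hypothetical illustrations of how the primitives above might be encoded:

  from dataclasses import dataclass
  from enum import Enum

  class Act(Enum):
      # Primitive acts named in this article; Schank's full set is larger.
      ATRANS = "transfer of an abstract relationship"
      PTRANS = "transfer of physical location"
      PROPEL = "application of physical force"
      GRASP = "grasping of an object"
      MOVE = "movement of a body part"
      MTRANS = "transfer of mental information"

  @dataclass(frozen=True)
  class Conceptualization:
      # One act tied to the basic tokens: objects, plus optional time/location.
      act: Act
      actor: str                    # real-world object performing the act
      obj: str                      # real-world object acted upon
      donor: str | None = None      # source of an ATRANS
      recipient: str | None = None  # destination of an ATRANS
      time: str | None = None
      location: str | None = None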

A sentence such as "John gave a book to Mary" is then represented as an ATRANS acting on real-world objects: possession of the book is transferred from John to Mary.
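
Continuing the hypothetical sketch above, both "John gave a book to Mary" and "Mary took a book from John" reduce to conceptualizations that share the same ATRANS core, which is the word-independence Schank aimed for:

  give = Conceptualization(act=Act.ATRANS, actor="John", obj="book",
                           donor="John", recipient="Mary")
  take = Conceptualization(act=Act.ATRANS, actor="Mary", obj="book",
                           donor="John", recipient="Mary")
  # Apart from the actor, both sentences reduce to the same ATRANS core:
  # possession of the book passes from John to Mary.
  assert (give.act, give.obj, give.donor, give.recipient) == \
         (take.act, take.obj, take.donor, take.recipient)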

ACTION    DESCRIPTION                                        EXAMPLE
ATRANS    Transfer of an abstract relationship               give
PTRANS    Transfer of the physical location of an object     go
PROPEL    Application of physical force to an object         push
GRASP     Grasping of an object by an actor                  clutch
MOVE      Movement of a body part by its owner               kick

References

  1. ^ Roger Schank (1969). "A conceptual dependency parser for natural language". Proceedings of the 1969 Conference on Computational Linguistics, Sång-Säby, Sweden, pp. 1–3.
  2. ^ Cardiff University, "Conceptual dependency theory".
  3. ^ Thomas W. Simon and Robert J. Scholes (1982). Language, Mind, and Brain. ISBN 0-89859-153-8, p. 105.