HOUSE_OVERSIGHT_013115.jpg


Extraction Summary

People: 6
Organizations: 2
Locations: 0
Events: 0
Relationships: 1
Quotes: 3

Document Information

Type: Academic text / scientific publication page
File Size: 2.52 MB
Summary

This document is page 199 of an academic book on Artificial General Intelligence (AGI) and cognitive science. It discusses 'Theory of Mind,' comparing two philosophical approaches (Theory Theory vs. Simulation Theory) and describing how the 'CogPrime' AI system might develop a theory of mind through embodied experience and PLN (Probabilistic Logic Networks). The document bears a House Oversight Bates stamp, suggesting it was part of a document production, likely related to Jeffrey Epstein's connections to scientists and AI research.

People (6)

Name | Role | Context
Davidson | Researcher/Author | Cited as [Dav84]; supports the belief that theory of mind depends on linguistic ability.
Dennett | Researcher/Author | Cited as [Den87]; supports the belief that theory of mind depends on linguistic ability.
Premack | Researcher | Cited as [PW78]; challenged the prevailing stance on theory of mind using primates.
Woodruff | Researcher | Cited as [PW78]; challenged the prevailing stance on theory of mind using primates.
Gordon | Researcher | Cited as [Gor86]; postulates that theory of mind relates to cognitive simulations.
Piaget | Psychologist | Mentioned in the chapter header regarding stages of development.

Organizations (2)

Name | Type | Context
CogPrime | AI system | An Artificial General Intelligence system discussed throughout the text.
House Oversight Committee | U.S. congressional committee | Implied by the Bates stamp 'HOUSE_OVERSIGHT_013115' at the bottom of the page.

Relationships (1)

Premack and Woodruff (Research Partners)
Cited together as [PW78] regarding primate experiments.

Key Quotes (3)

"We have thought through the details by CogPrime system should be able to develop theory of mind via embodied experience"
Source
HOUSE_OVERSIGHT_013115.jpg
Quote #1
"In an uncertain AGI context, both theories and simulations are grounded in collections of uncertain implications"
Source
HOUSE_OVERSIGHT_013115.jpg
Quote #2
"Recognizing 'embodied agent' as a category, however, is a problem fairly similar to recognizing 'block' or 'insect' or 'daisy' as a category."
Source
HOUSE_OVERSIGHT_013115.jpg
Quote #3

Full Extracted Text

Complete text extracted from the document (4,067 characters)

11.4 Piaget's Stages in the Context of Uncertain Inference 199
Davidson [Dav84], Dennett [Den87] and others support the common belief that theory of mind is dependent upon linguistic ability. A major challenge to this prevailing philosophical stance came from Premack and Woodruff [PW78] who postulated that prelinguistic primates do indeed exhibit "theory of mind" behavior. While Premack and Woodruff's experiment itself has been challenged, their general result has been bolstered by follow-up work showing similar results such as [TC97]. It seems to us that while theory of mind depends on many of the same inferential capabilities as language learning, it is not intrinsically dependent on the latter.
There is a school of thought often called the Theory Theory [BW88, Car85, Wel90] holding that a child's understanding of mind is best understood in terms of the process of iteratively formulating and refuting a series of naive theories about others. Alternately, Gordon [Gor86] postulates that theory of mind is related to the ability to run cognitive simulations of others' minds using one's own mind as a model. We suggest that these two approaches are actually quite harmonious with one another. In an uncertain AGI context, both theories and simulations are grounded in collections of uncertain implications, which may be assembled in context-appropriate ways to form theoretical conclusions or to drive simulations. Even if there is a special "mind-simulator" dynamic in the human brain that carries out simulations of other minds in a manner fundamentally different from explicit inferential theorizing, the inputs to and the behavior of this simulator may take inferential form, so that the simulator is in essence a way of efficiently and implicitly producing uncertain inferential conclusions from uncertain premises.
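[Illustrative aside, not part of the extracted text: a minimal Python sketch of the point above, showing how one store of uncertain implications can serve both "theory-style" explicit inference and "simulation-style" forward sampling. The knowledge base, class names, and numbers are invented; only the strength/confidence truth-value convention follows PLN.]

```python
import random
from dataclasses import dataclass

@dataclass
class Implication:
    premise: str
    conclusion: str
    strength: float     # estimated P(conclusion | premise)
    confidence: float   # how much evidence backs that estimate

# Hypothetical knowledge base of uncertain implications.
KB = [
    Implication("agent sees toy", "agent reaches for toy", 0.8, 0.7),
    Implication("agent reaches for toy", "agent grabs toy", 0.9, 0.6),
]

def theorize(premise: str, conclusion: str) -> float:
    """Theory-style use: chain implications into an explicit conclusion,
    multiplying strengths along the chain (a crude stand-in for PLN deduction)."""
    for imp in KB:
        if imp.premise == premise:
            if imp.conclusion == conclusion:
                return imp.strength
            return imp.strength * theorize(imp.conclusion, conclusion)
    return 0.0

def simulate(state: str, steps: int = 2) -> str:
    """Simulation-style use: sample the same implications forward, implicitly
    producing uncertain conclusions instead of stating them explicitly."""
    for _ in range(steps):
        nxt = [imp for imp in KB if imp.premise == state]
        if not nxt:
            break
        if random.random() < nxt[0].strength:
            state = nxt[0].conclusion
    return state

print(theorize("agent sees toy", "agent grabs toy"))  # explicit conclusion, ~0.72
print(simulate("agent sees toy"))                     # implicit, sampled outcome
```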
We have thought through the details by which a CogPrime system should be able to develop theory of mind via embodied experience, though at the time of writing practical learning experiments in this direction have not yet been done. We have not yet explored in detail the possibility of giving CogPrime a special, elaborately engineered "mind-simulator" component, though this would be possible; instead we have initially been pursuing a more purely inferential approach.
First, it is very simple for a CogPrime system to learn patterns such as "If I rotated by pi radians, I would see the yellow block." And it's not a big leap for PLN to go from this to the recognition that "You look like me, and you're rotated by pi radians relative to my orientation, therefore you probably see the yellow block." The only nontrivial aspect here is the "you look like me" premise.
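[Illustrative aside, not part of the extracted text: a toy sketch of the perspective-taking inference just described, using simple 2D geometry. The field-of-view model and all names are hypothetical; only the reasoning pattern ("if I were rotated like you, I would see X; you look like me; so you probably see X") comes from the source.]

```python
import math

FOV = math.pi / 2  # assume a 90-degree field of view

def can_see(heading: float, pos: tuple, obj: tuple) -> bool:
    """True if obj falls inside the field of view of an agent at pos facing heading."""
    bearing = math.atan2(obj[1] - pos[1], obj[0] - pos[0])
    diff = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= FOV / 2

my_pos, my_heading = (0.0, 0.0), 0.0
relative_rotation = math.pi          # "you're rotated by pi relative to me"
yellow_block = (-2.0, 0.0)           # behind me, hence in front of you

# Step 1: self-knowledge -- "if I rotated by pi radians, I would see the block."
i_would_see = can_see(my_heading + relative_rotation, my_pos, yellow_block)

# Step 2: the nontrivial premise -- "you look like me" (an uncertain judgment).
p_you_are_like_me = 0.9

# Step 3: transfer the simulated percept, discounted by the uncertain premise.
p_you_see_block = p_you_are_like_me if i_would_see else 0.0
print(f"P(you see the yellow block) ~= {p_you_see_block}")
```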
Recognizing "embodied agent" as a category, however, is a problem fairly similar to recognizing "block" or "insect" or "daisy" as a category. Since the CogPrime agent can perceive most parts of its own "robot" body-its arms, its legs, etc.-it should be easy for the agent to figure out that physical objects like these look different depending upon its distance from them and its angle of observation. From this it should not be that difficult for the agent to understand that it is naturally grouped together with other embodied agents (like its teacher), not with blocks or bugs.
The only other major ingredient needed to enable theory of mind is "reflection": the ability of the system to explicitly recognize the existence of knowledge in its own mind (note that this term "reflection" is not the same as our proposed "reflexive" stage of cognitive development). This exists automatically in CogPrime, via the built-in vocabulary of elementary procedures supplied for use within SchemaNodes (specifically, the atTime and TruthValue operators). Observing that "at time T, the weight of evidence of the link L increased from zero" is basically equivalent to observing that the link L was created at time T.
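[Illustrative aside, not part of the extracted text: a minimal sketch of the "reflection" equivalence above, detecting a link's creation time as the moment its weight of evidence first rises from zero. The Link class and history format are hypothetical stand-ins for the atTime and TruthValue operators named in the text.]

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    name: str
    # (time, weight_of_evidence) records: a reduced atTime/TruthValue view of the link
    history: list = field(default_factory=list)

    def observe(self, t: float, weight_of_evidence: float):
        self.history.append((t, weight_of_evidence))

def creation_time(link: Link):
    """Return the first time the weight of evidence rose above zero --
    by the argument in the text, equivalent to when the link was created."""
    prev = 0.0
    for t, w in sorted(link.history):
        if prev == 0.0 and w > 0.0:
            return t
        prev = w
    return None

L = Link("Inheritance(self, embodied_agent)")
L.observe(t=0.0, weight_of_evidence=0.0)
L.observe(t=3.0, weight_of_evidence=0.2)   # evidence first appears here
print(creation_time(L))                    # -> 3.0
```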
Then, the system may reason, for example, as follows (using a combination of several PLN rules including the above-given deduction rule):
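[The inference chain that followed this sentence on the page was not captured by the extraction, so that gap is left as-is. For orientation only, below is the independence-based deduction strength formula from the published PLN literature, which appears to be the kind of "deduction rule" the text refers to; the example numbers are invented.]

```python
def pln_deduction(s_ab: float, s_bc: float, s_b: float, s_c: float) -> float:
    """Strength of A->C given A->B, B->C and the term probabilities of B and C,
    under the PLN independence assumption:
        s_AC = s_AB*s_BC + (1 - s_AB)*(s_C - s_B*s_BC)/(1 - s_B)
    """
    if s_b >= 1.0:
        return s_c  # degenerate case: B is always true
    return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)

# Hypothetical chain, e.g. "I would see the block" -> "you are like me" -> "you see it"
print(pln_deduction(s_ab=0.9, s_bc=0.8, s_b=0.5, s_c=0.6))  # -> 0.76
```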
HOUSE_OVERSIGHT_013115
