Artificial Cognitive Systems
David Vernon
Professor of Informatics
University of Skövde
Sweden
[email protected]
www.vernon.eu
David Vernon, Artificial Cognitive Systems – A Primer, MIT Press, 2014
Chapter 3, Slide 1
Cognitive Architectures
Topic Overview
–  What is a cognitive architecture?
•  The cognitivist perspective
•  The emergent perspective
–  Desirable characteristics
•  Realism
•  Behavioural characteristics
•  Cognitive characteristics
•  Functional capabilities
•  Development
•  Dynamics
–  Designing a cognitive architecture
–  Example cognitive architectures
•  Soar
•  Darwin
•  ISAC
•  CLARION
–  Cognitive architectures: what next?
COGNITION
•  Cognitivist Systems
•  Hybrid Systems
•  Emergent Systems
–  Enactive Approaches
–  Connectionist Approaches
–  Dynamical Approaches
[Vernon, Metta, Sandini 2007]
Cognitive Architecture: the term originated with the work of [Newell et al. 1982]
COGNITION
Cognitivist Systems
Cognitive Architecture
The term Cognitive Architecture originated with the work of [Newell et al. 1982]: a Unified Theory of Cognition
Address many aspects of cognition
•  Attention
•  Memory
•  Problem solving
•  Decision making
•  Learning
•  …
from several perspectives
•  Psychology
•  Neuroscience
COGNITION
Cognitivist Systems
Cognitive Architecture
An embodiment of a scientific hypothesis about those aspects of human cognition that are
•  constant over time
•  independent of task
[Ritter & Young 2001]
COGNITION
Cognitivist Systems
Cognitive Architecture
Cognitive Architecture + Knowledge = Cognitive Model
[Lehman et al. 1997; also Anderson & Lebiere 1998, Newell 1990]
Cognitivist Cognitive Architecture
Overall structure and organization of a cognitive system
•  Essential modules
•  Essential relations between these modules
•  Essential algorithmic and representational details in each module
[Sun 2007]
[GMU-BICA Architecture: Samsonovich 2005]
Cognitivist Cognitive Architecture
Commitment to formalisms for
–  Short-term & long-term memories that store the agent’s beliefs, goals, and knowledge
–  Representation & organization of structures embedded in memory
–  Functional processes that operate on these structures
•  Performance / utilization
•  Learning
–  Programming language to construct systems embodying the architecture’s assumptions
[Langley 05, Langley 06, Langley et al. 09]
Emergent Cognitive Architecture
Emergent approaches focus on development
–  From a primitive state
–  To fully cognitive state, over the system’s lifetime
Emergent Cognitive Architecture
•  Two different views of development
–  Individual
–  Social
•  Two different theories of cognitive development
–  Jean Piaget (1896–1980)
–  Lev Vygotsky (1896–1934)
Emergent Cognitive Architecture
The cognitive architecture is the system’s phylogenetic configuration
–  The basis for ontogenesis: growth and development
•  Innate skills
•  Core knowledge (cf. Spelke)
–  A structure in which to embed mechanisms for
•  Perception
•  Action
•  Adaptation
•  Anticipation
•  Motivation
•  … and the development of all these
Cognitive Architecture:
phylogeny as the basis for ontogeny
Emergent Cognitive Architecture
Focus on
–  Autonomy-preserving anticipatory and adaptive
skill construction
–  The morphology of the physical body
in which the architecture is embedded
Desirable Characteristics of a
Cognitive Architecture
Desirable Characteristics
Desiderata for Cognitive Architectures [Sun 2004]
1.  Ecological realism
2.  Bio-evolutionary realism
3.  Cognitive realism
4.  Inclusiveness of prior perspectives
Desirable Characteristics
Ecological realism [Sun 2004]
•  Everyday activities
•  Embodied
•  Concurrent conflicting goals
Desirable Characteristics
Bio-evolutionary realism [Sun 2004]
•  Human intelligence reducible to a model of animal intelligence
Desirable Characteristics
Cognitive realism
•  Human psychology
•  Human neuroscience
•  Philosophy
Desirable Characteristics
Inclusiveness of prior perspectives [Sun 2004]
•  Draw on older models
•  Subsume older models
•  Supersede older models
Desirable Characteristics
Behavioural characteristics [Sun 2004]
Act & react …
•  Simple conceptual schemas
•  Simple weighing of alternatives
•  Temporal sequence of actions
•  Gradually-learned routine behaviours
… trial-and-error adaptation
Desirable Characteristics
Cognitive characteristics [Sun 2004]
•  Implicit bottom-up learning
•  Explicit symbolic learning
•  Functional or physical modularity
Desirable Characteristics
Cognitive architectures: research issues and challenges [Langley et al. 2009]
1.  Recognition & categorization
2.  Decision-making & choice
3.  Perception & situation assessment
4.  Prediction & monitoring
5.  Problem solving & planning
6.  Reasoning & belief maintenance
7.  Execution & action
8.  Interaction & communication
9.  Remembering, reflection, & learning
Desirable Characteristics
The importance of cognitive architectures: an analysis based on CLARION [Sun 2007]
1.  Perception
2.  Categorization
3.  Multiple representations
4.  Multiple types of memory
5.  Decision making
6.  Reasoning
7.  Planning
8.  Problem solving
9.  Meta-cognition
10. Communication
11. Action control and execution
12. Several types of learning
The importance of the interconnectivity between these processes
Desirable Characteristics
CogAff Cognitive Architecture Schema [Sloman 2000]
H-CogAff Cognitive Architecture [Sloman 2001]
Desirable Characteristics
Cognitive Architectures of Developmental Systems [Krichmar & Edelman 2005, 2006]
1.  Address connectivity and interaction between circuits/regions in the brain
2.  Effect perceptual categorization, without a priori knowledge (a model generator, rather than a model fitter, cf. [Weng 04])
3.  Embodied & capable of exploration
4.  Minimal set of innate behaviours
5.  Value system (set of motivations) to govern development
Facets of a Cognitive Architecture
•  Component functionality
•  Component interconnectivity
•  System dynamics
Facets of a Cognitive Architecture
•  Component functionality
–  Specification
•  Functionality
•  Theoretical foundations
•  Computational model
•  Information representation
•  Functional model (e.g. functional decomposition)
•  Data model (e.g. data dictionary or ER diagram)
•  Process-flow model (e.g. DFD diagram)
•  Behavioural model (e.g. state transition diagram)
•  Interface: input, output, protocols
–  Design choices: algorithms and data structures
–  Implementation
–  API specification
Facets of a Cognitive Architecture
•  Component interconnectivity
–  Data flow
–  Control flow
Facets of a Cognitive Architecture
•  System dynamics
–  Cognitivist cognitive architectures
•  Add knowledge to determine the dynamics and flow of information
–  Emergent cognitive architectures
•  Not so straightforward … can’t just add knowledge
Facets of a Cognitive Architecture
•  System dynamics
–  Emergent cognitive architectures
•  Dynamics result from interaction between the components
–  Driven by an embedded value system that governs the developmental process
–  Not by explicit rules that encapsulate prior declarative and procedural knowledge
Facets of a Cognitive Architecture
•  System dynamics
–  Emergent cognitive architectures
•  Need to specify the interactions between components
–  Small ensembles (at least)
–  Whole system (ideally)
Facets of a Cognitive Architecture
•  System dynamics
–  Emergent cognitive architectures
•  This is a tough challenge
–  Assemblies of loosely-coupled concurrent processes
–  Operating asynchronously
–  Without a central control unit
–  Dynamics depend on circular causality
COGNITION
Cognitivist Systems: a framework in which to embed knowledge
•  Organizational decomposition
•  Explicit inter-connectivity
•  Representational formalism
•  Algorithmic formalism
•  Memories
•  Formalisms for learning
•  Programming mechanism
Hybrid Systems
Emergent Systems: phylogeny as the basis for development
•  Innate skills & core knowledge
•  Memories
•  Formalism for autonomy
•  Formalism for development
Example Architectures
Surveys:
Biologically Inspired Cognitive Architectures Society, Comparative Repository of Cognitive Architectures, http://bicasociety.org/cogarch/architectures.htm (25 cognitive architectures)
A Survey of Cognitive and Agent Architectures, University of Michigan, http://ai.eecs.umich.edu/cogarch0/ (12 cognitive architectures)
Example Architectures
Surveys:
W. Duch, R. J. Oentaryo, and M. Pasquier.
“Cognitive Architectures: Where do we go from here?”,
Proc. Conf. Artificial General Intelligence, 122-136, 2008.
(17 cognitive architectures)
D. Vernon, G. Metta, and G. Sandini,
"A Survey of Artificial Cognitive Systems: Implications for the Autonomous
Development of Mental Capabilities in Computational Agents",
IEEE Transactions on Evolutionary Computation, Vol. 11, No. 2, pp. 151-180, 2007.
(14 cognitive architectures)
D. Vernon, C. von Hofsten, and L. Fadiga.
"A Roadmap for Cognitive Development in Humanoid Robots",
Cognitive Systems Monographs (COSMOS), Vol. 11, Springer
Chapter 5 and Appendix I
(20 cognitive architectures)
Example Architectures
Cognitivist Systems:
•  Soar [Newell 1996]
•  EPIC [Kieras & Meyer 1997]
•  ICARUS [Langley 05, Langley 2006]
•  GLAIR [Shapiro & Bona 2009]
•  CoSy [Hawes & Wyatt 2008]
Example Architectures
Hybrid Systems:
•  CLARION [Sun 2007]
•  ACT-R [Anderson et al. 2004]
•  ACT-R/E [Trafton et al. 2013]
•  KHR [Burghart et al. 2005]
•  LIDA [Franklin et al. 2007, Baars & Franklin 2009]
•  PACO-PLUS [Kraft et al. 2008]
Example Architectures
Emergent Systems:
•  iCub [Vernon et al. 2010]
•  Global Workspace [Shanahan 2006]
•  SASE [Weng 2004]
•  Darwin [Krichmar et al. 2005]
•  Cognitive Affective [Morse et al. 2008]
Example Architectures
Soar [Newell 96] (sitemaker.umich.edu/soar)
•  Newell’s candidate UTC
•  1983 – 2005 … (v 8.5)
•  Production system
•  Cyclic operation
–  Production firing (all)
–  Decision (cf. preferences)
•  Fine-grained knowledge representation
•  Universal sub-goaling (dealing with impasse)
•  General-purpose learning (encapsulates resolution of impasse)
http://cogarch.org/index.php/Soar/Architecture
Soar
Laird, J.E., Newell, A., Rosenbloom, P.S.: Soar: an architecture for general
intelligence. Artificial Intelligence 33, 1–64 (1987)
Rosenbloom, P., Laird, J., Newell, A. (eds.): The Soar Papers: Research on
Integrated Intelligence. MIT Press, Cambridge (1993)
Lehman, J.F., Laird, J.E., Rosenbloom, P.S.: A gentle introduction to soar, an
architecture for human cognition. In: Sternberg, S., Scarborough, D. (eds.)
Invitation to Cognitive Science. Methods, Models, and Conceptual Issues, vol. 4.
MIT Press, Cambridge (1998)
Lewis, R.L.: Cognitive theory, Soar. In: International Encyclopedia of the Social and
Behavioural Sciences. Pergamon, Elsevier Science, Amsterdam (2001)
Laird, J. E. Towards Cognitive Robotics, Unmanned Systems Technology XI.
Edited by Gerhart, G. R., Gage, D. W., Shoemaker, C. M., Proceedings of the
SPIE, Volume 7332, pp. 73320Z-73320Z-11 (2009).
Example Architectures
•  Newell’s candidate for a Unified Theory of Cognition
•  Archetypal and iconic cognitivist cognitive architecture
•  Production (or rule-based) system
–  Production: effectively an IF-THEN condition-action pair
–  A production system:
•  A set of production rules
•  A computational engine for interpreting or executing productions
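A production system of the kind just described (a rule set plus an engine that interprets it) can be sketched in a few lines of Python. This is an illustrative toy, not Soar’s actual rule language; the rules and facts are hypothetical.

```python
# Minimal production system: IF-THEN rules plus an engine that repeatedly
# fires any rule whose condition matches working memory, to quiescence.

def run_production_system(rules, working_memory, max_cycles=100):
    """Fire matching rules until no rule adds a new fact (quiescence)."""
    wm = set(working_memory)
    for _ in range(max_cycles):
        fired = False
        for condition, action in rules:
            # a rule fires if its condition holds and it adds something new
            if condition <= wm and not action <= wm:
                wm |= action
                fired = True
        if not fired:
            break
    return wm

# Hypothetical rules: condition -> facts added to working memory
rules = [
    (frozenset({"hungry", "has_food"}), frozenset({"eat"})),
    (frozenset({"eat"}), frozenset({"satiated"})),
]
result = run_production_system(rules, {"hungry", "has_food"})
```

Note how the second rule only becomes applicable after the first fires: chaining of this kind is what makes a rule set more than a lookup table.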
Example Architectures
•  Behaviour as movement through problem spaces
–  Goal (circle)
–  Problem space: expanding set of possibilities that can unfold over time (triangle)
–  States (rectangles)
•  Vocabulary of features (bold)
•  Their possible values (italics) … values can also be a set of features
–  State transition (arrows) ... operators reflecting internal or external behaviour
Example Architectures
Tying the content to the architecture
Knowledge about
•  things in the world
•  abstract ideas
•  mental actions
•  physical actions
•  how to use knowledge
Example Architectures
Soar continues to evolve … [Laird 09]
•  Perception
•  Action
•  Mental imagery (internal simulation)
•  Procedural memory & reinforcement learning
•  Semantic memory & learning
•  Episodic memory & learning
Example Architectures
EPIC [Kieras & Meyer 97]
•  Executive Process Interactive Control
•  Links high-fidelity models of perception and motor mechanisms with a production system
–  Only the timing!
•  Knowledge in production rules
•  Perceptual-motor parameters
•  All processors run in parallel
•  No learning
ACT-R 7.0 [Anderson et al. 04]
•  Adaptive Character of Thought [96] -> Adaptive Control of Thought-Rational [04]
•  Production system
–  Executes one production per cycle
–  Arbitration
•  Probability of reaching goal
•  Time cost of firing
•  Combined to find best trade-off
•  Declarative memory
–  Symbols (cf. Soar)
–  Activation values
–  Activation based on Bayesian analysis of probability of invocation
•  Learning (‘Rational Analysis’)
–  Includes sub-symbolic: P(Goal), C(fire), activation level, context association
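The arbitration just described (probability of reaching the goal combined with the time cost of firing) can be illustrated with the classic expected-utility form U = P·G − C. A sketch only: the production names and numbers are hypothetical, and real ACT-R adds noise and learning to these quantities.

```python
# ACT-R-style conflict resolution: each matching production has an estimated
# probability P of reaching the goal and a cost C of firing; the production
# maximizing expected utility U = P*G - C is the one executed.

def select_production(candidates, goal_value):
    """candidates: list of (name, P, C); return the name with max P*G - C."""
    return max(candidates, key=lambda c: c[1] * goal_value - c[2])[0]

# Hypothetical competing productions
candidates = [
    ("retrieve-fact", 0.9, 0.05),   # reliable but carries a retrieval cost
    ("guess",         0.5, 0.01),   # cheap but unreliable
]
best = select_production(candidates, goal_value=1.0)
# utilities: retrieve-fact = 0.85, guess = 0.49
```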
Example Architectures
ICARUS [Langley 05, Langley 06]
•  Cognition is grounded in perception and action
•  Concepts and skills are distinct cognitive structures
•  Skill and concept hierarchies are acquired cumulatively
•  Long-term memory is organized hierarchically
•  LT & ST structures have a strong correspondence
•  Symbolic cognitive structures are modulated with numeric functions
Example Architectures
Shanahan’s Global Workspace Architecture [Shanahan 06, Shanahan & Baars 06, Shanahan 05a, Shanahan 05b]
•  Anticipation and planning achieved through internal simulation
•  Action selection (internal and external) mediated by affect
•  Analogical representation (-> small semantic gap & easier grounding)
•  Global workspace model: parallelism is a fundamental component of the architecture, not an implementation issue
Legend: SC Sensory Cortex; MC Motor Cortex; BG Basal Ganglia (action selection); AC Association Cortex; Am Amygdala (affect)
Example Architectures
Global workspace model: sequence of states emerge from multiple
competing and cooperating parallel processes
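That competition-and-broadcast dynamic can be caricatured in a few lines: parallel processes each propose content with a salience, the most salient wins, and the winner is broadcast back to every process on the next cycle. The process names and salience values are hypothetical.

```python
# Toy global-workspace cycle: each process maps the previous broadcast to a
# (content, salience) proposal; the winning content becomes the next broadcast.

def workspace_cycle(processes, broadcast=None):
    """Run one competition round and return the winning content."""
    proposals = [p(broadcast) for p in processes]
    content, _ = max(proposals, key=lambda cs: cs[1])
    return content

# Hypothetical specialist processes competing for the workspace
visual = lambda b: ("red-object", 0.8)
auditory = lambda b: ("loud-noise", 0.3)

state = workspace_cycle([visual, auditory])
```

A sequence of such cycles yields the serial stream of workspace states emerging from parallel competition, as the slide describes.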
Example Architectures
Shanahan’s Global Workspace Architecture [Shanahan 06, Shanahan & Baars 06, Shanahan 05a, Shanahan 05b]
•  Implemented using G-RAMs (generalized random access memories)
•  Global workspace and cortical assemblies define an attractor landscape
•  Perceptual categories define attractors
•  Higher-order loop allows the GW to visit these attractors
Legend: SC Sensory Cortex; MC Motor Cortex; BG Basal Ganglia (action selection); AC Association Cortex; Am Amygdala (affect)
Example Architectures
AMD Autonomous Mental Development
[Weng et al. 01, Weng 02, Weng & Zhang 02, Weng 04a, Weng 04b]
Self-aware self-effecting (SASE) agent
Example Architectures
Review using 7 criteria [Vernon, von Hofsten, Fadiga 2010]:
1.  Embodiment
2.  Perception
3.  Action
4.  Anticipation
5.  Adaptation
6.  Motivation
7.  Autonomy
Example Architectures
Cognitivist Systems:
•  Only GLAIR addresses autonomy
•  Only the CoSy Architecture Schema addresses motivation
•  Only ADAPT makes any strong commitment to embodiment (cf. functionalism)
•  Only ICARUS addresses strong adaptation (development of new models)
Example Architectures
Hybrid Systems:
•  Only LIDA, CLARION, and PACO-PLUS address adaptation in the developmental sense
Example Architectures
Emergent Systems:
•  All address most of the 7 characteristics
•  IC-SDAL, SASE, and Cognitive-Affective address all 7
•  Only Global Workspace and Cognitive-Affective address anticipation in depth
•  Only SASE and Cognitive-Affective address adaptation in a strong manner
Some Architectures in More Depth
Soar
Soar
•  Operates in a cyclic manner
–  Production cycle
•  All productions that match the contents of declarative (working) memory fire
–  A production that fires may alter the state of declarative memory and cause other productions to fire
–  This continues until no more productions fire
–  Decision cycle
•  A single action from several possible actions is selected
•  The selection is based on stored action preferences
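The two cycles can be sketched as follows. This illustrates the control scheme only, not the Soar kernel; the rules, the `propose:` naming convention, and the preference values are hypothetical.

```python
# Soar-style nested cycles: the production cycle fires all matching rules to
# quiescence (elaboration); the decision cycle then selects one proposed
# operator using stored preferences.

def production_cycle(rules, wm):
    """Fire every rule whose condition holds until nothing new is added."""
    wm = set(wm)
    changed = True
    while changed:
        changed = False
        for cond, adds in rules:
            if cond <= wm and not adds <= wm:
                wm |= adds
                changed = True
    return wm

def decision_cycle(wm, preferences):
    """Pick the single proposed operator with the highest preference."""
    proposed = [f for f in wm if f.startswith("propose:")]
    if not proposed:
        return None   # in Soar this situation would raise an impasse
    return max(proposed, key=lambda p: preferences.get(p, 0))

# Hypothetical rules proposing two competing operators
rules = [
    (frozenset({"thirsty"}), frozenset({"propose:drink"})),
    (frozenset({"thirsty"}), frozenset({"propose:wait"})),
]
wm = production_cycle(rules, {"thirsty"})
op = decision_cycle(wm, {"propose:drink": 2, "propose:wait": 1})
```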
Soar
•  For each decision cycle, there may have been many production cycles
•  Productions in Soar are low-level
–  Knowledge is encapsulated at a very small grain size
Soar
•  Universal sub-goaling
–  There is no guarantee that the action preferences will lead to
•  a unique action, or
•  any action
–  In this case, the decision cycle may lead to an ‘impasse’
•  Soar sets up a new state in a new problem space (sub-goaling) with the goal of resolving the impasse
•  Resolving one impasse may cause other impasses, and the sub-goaling process continues
–  It is assumed that degenerate cases can be dealt with
•  e.g. if all else fails, choose randomly between two actions
Soar
•  Whenever an impasse is resolved
–  Soar creates a new production rule which summarizes the processing that occurred in the sub-state in solving the sub-goal
–  Resolving an impasse alters the system super-state
•  This change is called a result
•  It becomes the outcome of the production rule
•  The condition for the production rule to fire is derived from a dependency analysis
–  finding what declarative memory items matched in the course of determining the result
–  This change in state is a form of learning
•  It is the only form that occurs in Soar, i.e. Soar only learns new production rules
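Chunking, as just described, amounts to compiling the sub-goal’s dependency analysis into a single new rule. A toy sketch, with a hypothetical rule format and working-memory items:

```python
# Soar-style chunking in miniature: condition = the working-memory items the
# sub-goal actually matched (the dependency analysis); action = the result.

def build_chunk(matched_items, result):
    """Summarize sub-goal processing as one new production rule."""
    return (frozenset(matched_items), frozenset({result}))

# Suppose resolving an impasse between two tied operators examined these
# items and produced the result "prefer:drink":
chunk = build_chunk({"thirsty", "water-available"}, "prefer:drink")

# Next time the same items appear, the chunk fires directly, so the costly
# sub-goaling that originally produced the result is skipped entirely.
wm = {"thirsty", "water-available"}
cond, action = chunk
if cond <= wm:
    wm |= action
```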
Darwin
•  Series of robot platforms focussed on developmental cognition
•  Brain-based devices (BBDs)
–  Simulated nervous system
–  Develop spatial and episodic memory
–  Recognition capabilities
–  Autonomous experiential learning
•  Neuromimetic: mimic the neural structure of the brain
•  Differ from connectionist approaches: focus on
–  Nervous system as a whole
–  Constituent parts
–  Their interaction
Darwin
•  Principal neural mechanisms of a BBD
–  Synaptic plasticity
–  Reward (i.e. value) system
–  Reentrant connectivity
–  Dynamic synchronization of neuronal activity
–  Neuronal units with spatiotemporal response properties
•  Adaptive behaviour
–  Interaction of these neural mechanisms with sensorimotor correlations (contingencies) which have been learned autonomously through active sensing and self-motion
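The interaction of synaptic plasticity with a value system can be illustrated by a value-modulated Hebbian update: co-active pre- and post-synaptic units strengthen their connection, scaled by the reward signal. The learning rate and activity values below are hypothetical, and real BBDs use far richer neuronal dynamics.

```python
# Value-modulated Hebbian plasticity sketch: the weight change is the usual
# co-activity term (pre * post) gated by a scalar value (reward) signal.

def value_modulated_update(w, pre, post, value, rate=0.1):
    """Return the new weight after one value-gated Hebbian step."""
    return w + rate * value * pre * post

w = 0.5
w = value_modulated_update(w, pre=1.0, post=0.8, value=1.0)  # rewarded step
```

With `value = 0` the weight is unchanged, which is the essential point: the value system, not an external teacher, decides which sensorimotor correlations get consolidated.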
Darwin
•  Darwin VIII
–  Discriminates simple visual targets (coloured geometric shapes)
–  By associating them with an innately preferred auditory cue
–  Its simulated nervous system contains
•  28 neural areas
•  approximately 54,000 neuronal units
•  approximately 1.7 million synaptic connections
Darwin
•  Darwin IX
–  Navigates and categorizes textures using artificial whiskers
–  Based on a simulated neuroanatomy of the rat somatosensory system
–  Its simulated nervous system contains
•  17 areas
•  1101 neuronal units
•  approximately 8400 synaptic connections
Darwin
•  Darwin X
–  Develops spatial and episodic memory based on a model of the hippocampus and surrounding regions
–  Its simulated nervous system contains
•  50 areas
•  90,000 neuronal units
•  1.4 million synaptic connections
–  Systems
•  Visual system (object recognition, localization)
•  Head direction system
•  Hippocampal formation
•  Basal forebrain
•  Value/reward system based on dopaminergic function
•  Action selection system
ISAC
ISAC
K. Kawamura, S. M. Gordon, P. Ratanaswasd, E. Erdemir, and J.
F. Hall. Implementation of cognitive control for a humanoid robot.
International Journal of Humanoid Robotics, 5(4):547–586, 2008.
ISAC
•  ISAC: Intelligent Soft Arm Control
–  Hybrid cognitive architecture for an upper-torso humanoid robot (also called ISAC)
–  Constructed from an integrated collection of software agents and associated memories
–  Agents encapsulate all aspects of a component of the architecture
–  Agents operate asynchronously and communicate with each other by passing messages
ISAC
•  Comprises
–  activator agents for motion control
–  perceptual agents
–  a First-order Response Agent (FRA) to effect reactive perception-action control
•  Three memory systems
–  Short-term memory (STM)
–  Long-term memory (LTM)
–  Working memory system (WMS)
ISAC
•  STM
–  Robot-centred spatio-temporal memory of the perceptual events currently
being experienced
–  This is called a Sensory EgoSphere (SES)
–  Discrete representation of what is happening around the robot,
represented by a geodesic sphere indexed by two angles: horizontal
(azimuth) and vertical (elevation).
–  STM also has an attentional network that determines the perceptual events
that are most relevant and then directs the robot’s attention to them
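The SES as described (a discrete, robot-centred map of current percepts indexed by azimuth and elevation) might be sketched like this. The 10-degree cell size and the percepts are assumptions for illustration, not ISAC’s actual discretization.

```python
# Sketch of a Sensory EgoSphere: percepts are posted into angular cells
# indexed by (azimuth, elevation), and queries hit the same cell.

class SensoryEgoSphere:
    def __init__(self, cell_deg=10):
        self.cell = cell_deg
        self.events = {}                  # (az_bin, el_bin) -> percept

    def _index(self, azimuth, elevation):
        return (int(azimuth // self.cell), int(elevation // self.cell))

    def post(self, azimuth, elevation, percept):
        self.events[self._index(azimuth, elevation)] = percept

    def query(self, azimuth, elevation):
        return self.events.get(self._index(azimuth, elevation))

ses = SensoryEgoSphere()
ses.post(azimuth=45, elevation=10, percept="red ball")
found = ses.query(47, 12)    # falls in the same 10-degree cell
```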
ISAC
•  LTM
–  Stores information about the robot’s learned skills and past experiences
–  Semantic memory: the robot’s declarative memory of the facts it knows
–  Episodic memory
–  Procedural memory: representations of the motions it can perform
ISAC
•  Episodic memory
–  Abstracts past experiences & creates links or associations between them
–  Information about
•  External situation (i.e. task-relevant percepts from the SES)
•  Goals
•  Emotions (internal evaluation of the perceived situation)
•  Actions
•  Outcomes that arise from actions
•  Valuations of these outcomes (e.g. how close they are to the desired goal state and any reward received as a result)
ISAC
•  Episodic memory
–  Episodes are connected by links that encapsulate behaviours
•  Transitions from one episode to another
–  Multi-layered
ISAC
•  WMS
–  Inspired by neuroscience models of brain function
–  Temporarily stores information that is related to the task currently being
executed
–  A type of cache memory for STM and the information it stores, called
chunks
–  Encapsulates expectations of future reward (learned using a neural
network)
ISAC
•  Cognitive behaviour is achieved through the interaction of several
agents
–  Central Executive Agent (CEA)
–  Internal Rehearsal System (simulates the effects of possible actions)
–  Goals & Motivation sub-system
•  Intention Agent
•  Affect Agent
–  the CEA and Internal Rehearsal System form a compound agent called the
Self Agent
ISAC
•  Cognitive behaviour is achieved through the interaction of several
agents
–  The CEA is responsible for cognitive control
–  Invokes the skills required to perform some given task on the basis of the
current focus of attention and past experiences
–  The goals are provided by the Intention Agent
–  Decision-making is modulated by the Affect Agent
ISAC
•  ISAC works the following way
–  Normally, the First-order Response Agent (FRA) produces reactive
responses to sensory triggers
–  However, it is also responsible for executing tasks
–  When a task is assigned by a human, the FRA retrieves the skill from
procedural memory in LTM that corresponds to the skill described in the
task information
–  It then places it in the WMS as chunks along with the current percept
–  The Activator Agent then executes it, suspending execution whenever a
reactive response is required
David Vernon, Artificial Cognitive Systems – A Primer, MIT Press, 2014
Chapter 3, Slide 87
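The FRA's task path can be sketched as follows. This is a hedged illustration under assumed names (`procedural_ltm`, `fra_handle_task`, `activator_execute` are all hypothetical): look up the skill in procedural long-term memory, post it to the working memory system as chunks together with the current percept, and let the Activator Agent step through it, yielding whenever a reactive trigger fires.

```python
# Illustrative sketch (not ISAC's actual code) of the FRA task path.

procedural_ltm = {"grasp": ["reach", "close-gripper", "lift"]}

def fra_handle_task(task, percept, wms):
    skill = procedural_ltm.get(task)      # look up the matching skill
    if skill is None:
        return False                      # no match: hand over to the CEA
    wms.append({"skill": skill, "percept": percept})  # chunks into WMS
    return True

def activator_execute(wms, reactive_trigger=None):
    chunk = wms.pop()
    executed = []
    for step in chunk["skill"]:
        if reactive_trigger and reactive_trigger(step):
            # Suspend task execution while a reactive response runs.
            executed.append("suspend:reactive-response")
        executed.append(step)
    return executed

wms = []
if fra_handle_task("grasp", "object-in-view", wms):
    print(activator_execute(wms))   # → ['reach', 'close-gripper', 'lift']
```

Returning `False` on a missed lookup models the hand-over to the Central Executive Agent described on the next slide.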
ISAC
•  ISAC operates as follows (continued)
–  If the FRA finds no skill matching the task, the Central Executive Agent takes over
–  It recalls from episodic memory past experiences and behaviours that contain information similar to the current task
–  One behaviour-percept pair is selected, based on the current percept in the SES, its relevance, and the likelihood of successful execution as determined by internal simulation in the IRS
–  This pair is then placed in working memory and the Activator Agent executes the action
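The CEA fallback path above can also be sketched. Again this is an illustrative assumption, not ISAC code: episodic entries, the relevance values, and the `irs_simulate` stand-in for internal rehearsal are all hypothetical; the point is the selection rule, which combines percept match in the SES, relevance, and IRS-estimated success likelihood.

```python
# Hedged sketch of the CEA fallback path (illustrative names and values).

episodic_memory = [
    {"behaviour": "push", "percept": "box", "relevance": 0.3},
    {"behaviour": "grasp", "percept": "cup", "relevance": 0.9},
]

def irs_simulate(candidate):
    """Internal rehearsal: crude stand-in returning a success likelihood."""
    return 0.8 if candidate["behaviour"] == "grasp" else 0.5

def cea_select(current_percept, memory):
    def score(c):
        # Combine SES percept match, relevance, and simulated success.
        match = 1.0 if c["percept"] == current_percept else 0.0
        return match + c["relevance"] + irs_simulate(c)
    return max(memory, key=score)

working_memory = []
chosen = cea_select("cup", episodic_memory)
working_memory.append(chosen)      # the Activator Agent would execute this
print(chosen["behaviour"])         # → grasp
```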
Caveat
“In the stream of cognitive processes one can
conceptually isolate certain components, for instance
(i) the faculty to perceive,
(ii) the faculty to remember, and
(iii) the faculty to infer.
“But if one wishes to isolate these faculties functionally
or locally, one is doomed to fail.
Consequently, if the mechanisms that are responsible
for any of these faculties are to be discovered, then
the totality of cognitive processes must be considered.”
Heinz von Foerster
Recommended Reading
Vernon, D., Metta, G., and Sandini, G., "A Survey of Artificial Cognitive Systems: Implications for the Autonomous Development of Mental Capabilities in Computational Agents", IEEE Transactions on Evolutionary Computation, special issue on Autonomous Mental Development, Vol. 11, No. 2, pp. 151-180 (2007).
Vernon, D., von Hofsten, C., and Fadiga, L., A Roadmap for Cognitive Development in Humanoid Robots, Cognitive Systems Monographs (COSMOS), Springer, ISBN 978-3-642-16903-8 (2010); Chapter 5 and Appendix A.
Lehman, J. F., Laird, J. E., and Rosenbloom, P. S., "A Gentle Introduction to Soar, an Architecture for Human Cognition", in Sternberg, S. and Scarborough, D. (eds.), Invitation to Cognitive Science: Methods, Models, and Conceptual Issues, Vol. 4, MIT Press, Cambridge (1998).